Improve productivity at nm nodes with faster physical verification

02/01/2008

EXECUTIVE OVERVIEW

The increasing number and complexity of design rules significantly lengthen design rule check (DRC) run times, produce many more DRC violations, and make those violations more intricate and thus harder to analyze and fix. This article describes a method that enables designers to meet their tape-out schedules by shortening the verification cycle and reducing the number of full-chip iterations required late in the design flow.

Physical verification sign-off is a major roadblock designers must face. The size and complexity of nanometer IC designs, and the volume of geometric content in their layouts, have skyrocketed. The number of DRC checks required at each new process node has grown just as rapidly. Designers are forced to wait for lengthy DRC runs to end before beginning the challenging, time-consuming work of analyzing and correcting violations, and then must wait through another lengthy run to confirm the violations were corrected. Maintaining design intent becomes progressively more difficult as chips grow larger and process nodes more complex. The time required to find and fix layout errors in these designs can rapidly overrun production schedules and delivery dates.

Physical verification

The nature of physical verification has undergone dramatic changes over the past several years. At 130nm, designs that were DRC-clean began failing in silicon. “Recommended” design rules were introduced to handle systematic variability, and new verification tools predicted and adjusted for manufacturing issues such as stress effects, via insertion, and metal fill. These changes, while necessary to ensure silicon success, lengthened the total time from initial design to successful tape-out by extending DRC cycle times and increasing the number of iterations required to achieve a DRC-clean design.

At the same time, the cost of computing hardware was falling, making multi-CPU and distributed processing reasonable options for design teams and driving further improvements in, and wider use of, these strategies for DRC processing. At 90nm and below, faster algorithms were developed to manage data, while highly scalable environments paved the way for advances in parallel processing. Together, these techniques reduced both memory requirements and overall DRC runtimes.

The tug of war between design expansion and process improvement continues. Despite these innovations, physical verification cycle time is still increasing as process nodes shrink. Full-chip DRC runtimes of six hours or more, and more than a dozen iterations before reaching DRC-clean, are not uncommon. Rule complexity at smaller process nodes also has a direct impact on debug time: complex rules with many interdependent measurements mean designers must spend more time debugging each error than they did with the simpler checks of larger nodes.

Design teams using incremental DRC enhance their ability to find and resolve not only more yield issues, but also more complex and interdependent combinations of issues, without impacting tape-out schedules. Thus, even as process nodes decrease and complexity rises, fabs and foundries can count on receiving their tape-out files on time, enabling them to meet production goals and market delivery dates.

A new strategy for DRC

Clearly, new strategies are required. While designs are not going to get simpler or easier to implement, we can enable designers to meet their tape-out schedules by shortening the verification cycle time and reducing the number of full-chip iterations required late in the design flow.

As discussed above, speeding up data processing and/or the complete DRC run has already been addressed in multiple ways: using faster hardware, optimizing DRC commands to improve efficiency, optimizing the database for faster data processing, and improving the distribution of operations across the available hardware.
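To make the last of these concrete, here is a minimal Python sketch of farming independent rule checks out to multiple CPUs. It is an illustration only; the rule names and the check_rule() stub are hypothetical placeholders, not any tool’s actual API.

```python
# Sketch: distribute independent DRC rule checks across CPU cores.
from concurrent.futures import ProcessPoolExecutor

def check_rule(rule_name):
    """Run one independent DRC rule over the layout (stub)."""
    # A real checker would scan layout geometry here; we simply
    # return an empty violation list for illustration.
    return (rule_name, [])

RULES = ["M1.SPACING", "M1.WIDTH", "VIA1.ENCLOSURE", "M2.SPACING"]

if __name__ == "__main__":
    # Independent rules run in parallel; results arrive as each
    # worker finishes rather than at the end of a serial run.
    with ProcessPoolExecutor() as pool:
        for rule, violations in pool.map(check_rule, RULES):
            print(f"{rule}: {len(violations)} violations")
```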

Because DRC-clean is the gating factor for other design steps such as layout vs. schematic (LVS) checking, parasitic extraction, yield enhancement, etc., getting to DRC-clean faster provides more time in the design flow for these activities, and more time in the overall production flow to focus on post-tape-out activities, such as yield analysis, resolution enhancement and mask data preparation, while still meeting aggressive tape-out schedules.

By taking an incremental approach to DRC, designers can make trade-offs, such as restricting the checking of their design, to get shorter cycle times and more iterations per day. With incremental DRC, designers can target checks only at those regions of the design that need checking, avoiding unnecessary work and saving hours of runtime. This approach vastly improves runtime over full-chip DRC and reduces the total number of iterations required to reach “DRC clean.”

Incremental DRC

Consider the following example: a designer uses a P&R tool to make a simple engineering change order (ECO) with minimal impact on the overall design. If the design was previously DRC-clean, making this change invalidates that result and forces the design to be rechecked. Traditionally, this means the designer must rerun DRC on the entire design just to find any violations related to the small change.


Figure 1. Incremental DRC with parallel debugging facilitates validation of fixes.

Ultimately, the question the designer wants to answer at this point is, “Did I introduce an error with my modification?” With an incremental approach to DRC, the designer can instead target just the changed region and run only the minimum set of rules affected by the change, saving hours over the traditional design flow (Fig. 1).
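The idea can be sketched in a few lines of Python. This illustrates the principle only, not a real DRC engine: the Rule records, layer names, and run_check() stub are all hypothetical. Only rules whose layers the ECO touched are run, and only inside a window around the edit, expanded by a margin representing the largest rule interaction distance so edge effects at the window boundary are not missed.

```python
# Sketch: restrict DRC to the region and rules an ECO affected.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    layers: frozenset

RULES = [
    Rule("M1.SPACING", frozenset({"M1"})),
    Rule("M2.SPACING", frozenset({"M2"})),
    Rule("VIA1.ENCLOSURE", frozenset({"M1", "VIA1", "M2"})),
]

def changed_window(eco_shapes, margin):
    """Bounding box of the edited (x1, y1, x2, y2) rectangles,
    expanded by the largest rule interaction distance."""
    xs = [x for s in eco_shapes for x in (s[0], s[2])]
    ys = [y for s in eco_shapes for y in (s[1], s[3])]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)

def run_check(rule, window):
    print(f"checking {rule.name} inside {window}")  # stub

def incremental_drc(eco_shapes, eco_layers, margin=1.0):
    window = changed_window(eco_shapes, margin)
    # Run only the rules whose layers the edit could have affected.
    for rule in (r for r in RULES if r.layers & eco_layers):
        run_check(rule, window)

# An ECO that moved one M2 shape triggers only the M2-related rules.
incremental_drc([(10.0, 4.0, 12.5, 4.2)], {"M2"})
```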

Another key aspect of incremental verification is the ability to parallelize debugging with the DRC run itself. Debugging usually consumes the most time in the DRC cycle, yet until now debugging functionality has been only marginally improved by DRC optimization tactics. A more robust, efficient, and faster debugging environment lets designers quickly identify, fix, and re-verify their designs.

Extending the example above: after performing the short incremental verification, the designer is ready to check the entire design. In an incremental flow, results are reported to the designer as they are found, rather than being made available only at the end of the DRC run. The designer can immediately fix an error and validate that the fix was correct and caused no other problems. In this way, physical design iterations proceed concurrently with DRC processing, substantially reducing overall turnaround time.
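A toy sketch of this workflow parallelism (pure illustration; the violation strings are invented): the checker streams each violation into a queue the moment it is found, and the debug loop consumes them while the run is still in progress.

```python
# Sketch: debug violations concurrently with a running check.
import queue
import threading
import time

results = queue.Queue()

def full_chip_drc():
    for i in range(3):
        time.sleep(0.1)                # stand-in for check time
        results.put(f"violation-{i}")  # report each error immediately
    results.put(None)                  # sentinel: run complete

threading.Thread(target=full_chip_drc).start()
while (v := results.get()) is not None:
    print(f"debugging {v} while DRC continues...")
```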

Diagnosing LVS errors

The second phase of physical verification, LVS checking, has long been a time-consuming process requiring intensive manual analysis of errors. A traditional LVS debug flow consists of reading cryptic textual LVS report files, followed by hand-tracing errors in a layout editor and/or the ASCII netlists. Designers at the end of the design flow need a solution that minimizes manual intervention and cross-functional interaction, and ultimately speeds design verification (Fig. 2).


Figure 2. Graphical debugging shortens the overall debug cycle.

Without accurate connectivity, even the best chip design will fail in production. At smaller process nodes, LVS errors must be accurately diagnosed and remedied in a fraction of the time it has traditionally taken, in order to meet IC manufacturers’ requirements. Identifying and repairing LVS errors in complex ICs requires automated, exhaustive analysis that ensures no errors are overlooked. Equally critical is the ability to rapidly and accurately pinpoint exact error locations without manual deliberation. Designers are now beginning to see LVS technology that supports these identification and analysis needs.

Today, many designs are composed of multiple components, often created by different groups, plus third-party IP from different companies. It is difficult, if not impossible, to have all engineers employ a common design methodology, so it is important that LVS tools rely more heavily on the electrical behavior inherent in the layout and less on the placement of text labels.

Time-saving strategies necessary for today’s designs include tools that ensure fast run times, coupled with an integrated environment that lets designers identify errors, cross-select into the design environment to locate each error quickly, and then fix it. Advanced LVS tools powered by intelligent algorithms provide highly accurate and precise error identification, relying less on specific text labeling or design-style methodologies.

When we look at a typical LVS run, the majority of the time is spent in the extract and debug steps. Extraction loads the design file, determines connectivity, recognizes devices, and performs other analysis tasks; it is all CPU runtime. As with DRC, hardware parallelization methods can be employed to reduce it.
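At its core, the connectivity step is a connected-components problem: shapes on connecting layers that touch or overlap belong to the same electrical net. Below is a minimal union-find sketch over axis-aligned rectangles, for illustration only; production extractors use hierarchical data and spatial indexing rather than this O(n²) pairwise test.

```python
# Sketch: group touching rectangles (x1, y1, x2, y2) into nets.
def overlaps(a, b):
    return (a[0] <= b[2] and b[0] <= a[2] and
            a[1] <= b[3] and b[1] <= a[3])

def extract_nets(shapes):
    parent = list(range(len(shapes)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Union every pair of touching shapes into one component.
    for i in range(len(shapes)):
        for j in range(i + 1, len(shapes)):
            if overlaps(shapes[i], shapes[j]):
                parent[find(i)] = find(j)

    nets = {}
    for i in range(len(shapes)):
        nets.setdefault(find(i), []).append(shapes[i])
    return list(nets.values())

print(extract_nets([(0, 0, 2, 1), (1, 0, 3, 1), (5, 5, 6, 6)]))
# -> two nets: the first two shapes merge, the third is isolated
```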

The other area where significant time is spent is the debug process, in which an engineer must interpret the LVS results and determine what changes are needed for the design to match the design intent.


Figure 3. Logic injection provides faster turnaround times.

One highly useful technology for LVS tools is logic injection, which automatically scans for repeated and common device patterns (Fig. 3). By recognizing repeated structure in the design, LVS can simplify the repeated devices and “inject” a level of hierarchy that it then uses during the comparison process. Logic injection provides scalability even in flat designs.
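A hedged sketch of the pattern-matching idea (the netlist model and all names are invented for illustration): scan a flat transistor list for a repeated pattern, here a CMOS inverter (a PMOS and an NMOS sharing gate and drain), and replace each match with an instance of an injected subcircuit.

```python
# Sketch: fold repeated inverter patterns into injected hierarchy.
from collections import namedtuple

Mos = namedtuple("Mos", "name type gate drain source")

flat = [
    Mos("MP1", "P", "a", "y1", "vdd"), Mos("MN1", "N", "a", "y1", "gnd"),
    Mos("MP2", "P", "b", "y2", "vdd"), Mos("MN2", "N", "b", "y2", "gnd"),
    Mos("MX9", "N", "c", "z", "gnd"),  # matches no inverter pattern
]

def inject_inverters(devices):
    """Replace each matched PMOS/NMOS pair with an XINV instance."""
    pmos = {(d.gate, d.drain): d for d in devices if d.type == "P"}
    instances, remaining = [], []
    for d in devices:
        if d.type != "N":
            continue  # PMOS devices are resolved via the lookup table
        key = (d.gate, d.drain)
        if key in pmos:
            instances.append(("XINV", d.gate, d.drain))  # injected level
            del pmos[key]
        else:
            remaining.append(d)
    remaining.extend(pmos.values())  # unmatched PMOS stay flat
    return instances, remaining

insts, rest = inject_inverters(flat)
print(insts)  # [('XINV', 'a', 'y1'), ('XINV', 'b', 'y2')]
print(rest)   # MX9 is left as a flat device
```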


Figure 4. Smaller process nodes require innovative parameter extraction techniques.

With the continuing move to smaller geometries, fast yet accurate extraction of device parameters becomes increasingly important. As high-end process nodes (from 90nm down to 65nm and 45nm) are implemented, designers may find that accurate extraction of all critical parameters requires custom measurements (Fig. 4).

Simply extracting the transistor width and length is no longer sufficient. Custom parameters can include such factors as the effects of strained silicon, well proximity, and other advanced effects.
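As one illustration (simplified to one dimension; real extraction works on full layout polygons), stress models such as the BSIM length-of-diffusion effect use parameters like SA and SB, the distances from each gate edge to the end of the diffusion region, which a custom measurement must pull from the layout.

```python
# Sketch: compute the SA/SB stress parameters from 1-D coordinates.
def stress_params(diff_x1, diff_x2, gate_x1, gate_x2):
    """Distances from the gate edges to the diffusion boundary.
    All coordinates are x positions in microns (simplified model)."""
    sa = gate_x1 - diff_x1  # gate edge to source-side diffusion end
    sb = diff_x2 - gate_x2  # gate edge to drain-side diffusion end
    return sa, sb

# Diffusion spans 0..1.0 um; the gate sits at 0.4..0.5 um.
print(stress_params(0.0, 1.0, 0.4, 0.5))  # -> (0.4, 0.5)
```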

Time spent in LVS debugging can dramatically affect time to tape-out, but incomplete debugging can result in logic failures. A thorough scope of debugging and analysis functionality is essential, including full cross-probing of SPICE netlists, netlist browsing and comparison, and the identification and isolation of shorts.

Run times will improve if there is close coordination not only between the DRC and LVS tools, but also between these tools and the design environment. The synergy of a common rule deck language, along with consistent syntax and a shared environment, enables complete physical verification with maximum efficiency and accuracy.

Conclusion

Accelerating physical verification is no longer just a measure of efficient operation; it is essential for keeping pace with the demands of nanometer design. Incremental DRC offers a new dimension of workflow parallelism that can dramatically reduce overall DRC cycle times for nanometer IC designs. Adding the intelligence of an effective, efficient LVS tool provides a synergistic approach to total cycle-time reduction. As the industry continues to implement ever-smaller process nodes, these technologies combine with innovations in mask and chip production to ensure that companies can maintain market position and meet production schedules.

James Paris received his BS degree in CAD engineering technology. He is a technical marketing engineer for the Calibre product line at Mentor Graphics Corp., 8005 SW Boeckman Rd., Wilsonville, OR, 97070 USA; ph 800/592-2210, e-mail [email protected].

Matthew Hogan received a B. Eng degree and an MBA. He is a marketing engineer for the Calibre product line at Mentor Graphics Corp.