Circuit analysis with process variation: a DFM performance metric
04/01/2007
The primary problem with committing to today’s nanometer DFM solutions is the lack of understanding of the net benefit provided by this new generation of EDA tools. DFM tools developed to aid the designer in layout improvements for manufacturability carry both a software cost and an increase in design time. Product development managers must weigh this added cost against the added performance margin when deciding which tools belong in new design flows. This article describes a methodology that calculates a mathematical score directly related to yield. The score gives the layout designer a metric for improving a circuit for manufacturability.
The return on investment (ROI) calculation for DFM tool introduction is difficult to quantify, as was emphasized in the DAC-2006 DFM panel discussion entitled “DFM: Where is the proof?” [1]. The ultimate goal and benefit of DFM is to improve yields. Currently, the major foundry players implement many variations of yield models that describe yield fallout due to random defects. These models are constantly being improved to adapt to new process technologies, but they account only for the random-defect contribution to yield. As we push into the nanometer technologies, however, systematic defects, rather than random defects, are becoming the dominant yield limiters.
New software tools are available that can be used to produce yield-related scoring of systematic defects by modeling and simulating the effects on circuit functionality due to wafer process variation. Applying both the existing random defect analysis and the new systematic analysis will provide the designer the necessary feedback to determine the most satisfactory or optimal layout for the design.
A new feature emerging with 65nm development is the ability for designers to simulate the process variation effects of lithography and etch over various circuit layout configurations. Using these simulation outputs, a SPICE netlist can be extracted for each of several combinations of process conditions. Simulation of the desired circuit with these extracted SPICE netlists allows the designer to accurately perform timing and power analysis across the silicon process window. Using the simulated circuit performance results in conjunction with the actual fab process control distributions, a scoring system has been created to evaluate the relative impact on yield of multiple process layout configurations due to systematic process variation.
Current approach
For random defect yield characterization, probabilistic models based on Poisson, Murphy, negative binomial, and other equations have been developed. These models comprehend the cumulative effect of each process layer to account for the entire process flow. Output of the model is a yield-related score based on defect density estimations from the foundry.
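For reference, these classical random-defect yield models are compact enough to state directly. Below is a minimal sketch in Python; the die area, defect density, and clustering-factor values are illustrative placeholders, not foundry data:

```python
import math

def poisson_yield(area, d0):
    """Poisson model: assumes defects are uniformly, randomly distributed."""
    return math.exp(-area * d0)

def murphy_yield(area, d0):
    """Murphy model: averages the Poisson model over a triangular
    defect-density distribution."""
    ad = area * d0
    return ((1.0 - math.exp(-ad)) / ad) ** 2

def neg_binomial_yield(area, d0, alpha):
    """Negative binomial model: alpha is the defect clustering factor
    (as alpha grows large, this approaches the Poisson model)."""
    return (1.0 + area * d0 / alpha) ** (-alpha)

# Illustrative values only -- real defect densities come from the foundry.
print(poisson_yield(1.0, 0.2))            # ~0.819
print(murphy_yield(1.0, 0.2))             # ~0.822
print(neg_binomial_yield(1.0, 0.2, 2.0))  # ~0.826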
The original methodology was crude because design sensitivity was not considered. Current technology can account for the design layout using critical area analysis (CAA) tools, which combine line spacings and widths with foundry-supplied defect size distributions to improve the yield model calculations. Additionally, layouts can be modified, based on feedback from this scoring system, to reduce susceptibility to random particle defects.
Figure 1. Ring oscillator frequency surface response plot of polysilicon variation.
Other DFM tools that automatically modify layouts to improve manufacturability are also becoming available. Improved density-fill tools for CMP control, via-enclosure tools, wire-modification tools, and via-doubling tools are just some of the new EDA offerings that harden layouts against both the systematic and random yield-limiting effects of manufacturing. An important problem with these automated layout tools, however, is that there is no scoring system to determine the ROI of using them; users are limited to the CAA results to estimate the potential gain.
Systematic approach
EDA vendors are now providing systematic process-modeling capabilities to designers for improved DFM-based design flows. One of the first offerings in this new generation of tools is lithography and etch simulation, based on existing resolution enhancement techniques (RET). Designers can now view the physical shape changes of the original layout that result from simulating the lithography and etch processing, based on foundry models. Design rules can be created and used to flag potential weak points in the resulting simulated layout.
Lithography and etch processing distributions representing the extent of actual foundry variations can be added to the simulator as well, allowing designers to view the full range of physical shape changes to the layout that result from foundry process variation. Designers can then improve the robustness of the layout to shape alteration across the silicon process window by iteratively changing the layout of the flagged regions and re-simulating until those regions are no longer flagged as weak.
Another use of these tools is to simulate discrete and parasitic devices extracted from the layout. These device extractions can also be done to correspond to specific choices of process parameters. Using the device extractions, the designer can simulate functional performance parameters of the circuit such as timing and power over the expected silicon process window during the design phase of a product.
Current functional simulation techniques (not utilizing layout simulation) assume that all discrete components will vary uniformly across the design, which does not represent what actually happens during silicon processing. Running layout simulation will take into account spatial context variation in lithography and etch processes, and so the subsequent device extractions will be dependent on spatial context.
One can extract a multidimensional design of experiment (DOE)-based best-fit surface response equation to quantitatively relate the significant silicon process variables to functional performance parameters of interest such as timing or power. Then, using standard Monte Carlo methodology, the distributions for silicon process variables are related through the response surface equation to give the expected distribution of the functional performance parameter, and thus a functional yield-related score is derived for a particular layout.
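Stated compactly (in our notation, not that of any particular tool), the yield-related score for one functional response R over n process variables p1…pn is the probability that the Monte Carlo-sampled response lands within its budget limits:

Score(R) = Pr(Lmin ≤ R(p1, p2, …, pn) ≤ Lmax), with each pi drawn from its fab distribution, typically pi ~ N(μi, σi²)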
Data requirements
Data needed to complete this task include process simulation models for the lithography and etch processes, statistical process control (SPC) limits for those processes, and timing and power budget allowances for the circuit under analysis. Typically, Gaussian statistics from the fab’s SPC system quantify the process variation. Circuit simulation budgets are typically determined at the floor-planning stage of a project, prior to layout.
This type of analysis is limited by the quality of its model inputs. The process models affect the simulated contours, which are used to extract physical parameters, and the equations used to reduce those contours into SPICE model input parameters must be silicon-validated. SPICE models must also be altered to remove the constants used to adjust for process variation, to prevent double counting.
The scoring structure is constrained by the number of coverage points in each process-variable dimension. The more points used, the more accurate the response curve will be, but at the cost of the run time needed to calculate more process conditions. To accelerate the analysis by constraining the run to just the critical timing paths, those paths must first be identified in the layout.
Scoring structure
Outputs of the Monte Carlo analysis are histograms of probable outcomes for both timing and power. Given the limits placed on the histogram, the score is the percentage of outcomes that fall within the range of acceptability. There is one score for each surface response curve calculated in the analysis, and the lowest score among all the responses is the final score for the particular layout.
If multiple scores from different layout variations of the circuit are available, the layout with the highest final score is the best layout by this analysis. Since the score is proportional to yield, the highest score is the most desirable.
Example
To illustrate the use of this new type of scoring analysis, an inverter ring oscillator (RO) circuit was created with 65nm design rules. The functional performance parameters of interest in this example are RO frequency and RO power. The base layout is a 31-stage inverter circuit organized as a minimum-spacing array of 30 inverters, with the 31st stage placed in an isolated condition.
During production, each RO circuit will experience slightly different silicon processing conditions due to each silicon process variable randomly varying around its mean over time. These process variations will result in RO frequency and power variations around some mean RO frequency and power values. For the sake of this illustration, the four most significant silicon process variables that impact the RO performance are assumed to be: polysilicon focus, polysilicon dose, active focus, and active dose.
The first task is to generate a response surface for RO frequency and RO power over the 4D reduced process-variable space. Each of the four process variables can take a low, mean, or high value in the simulation, giving 3⁴ = 81 possible unique process combinations; a subset of 49 combinations spanning the extremes from high to low was selected for this example.
Taking advantage of the new EDA technology for simulating litho and etch effects, improved shape parameters (length/width) were extracted for each transistor in the RO circuit for each of the 49 process conditions. Combining this technology with the existing SPICE netlist tools, 49 SPICE netlists for the RO circuit were derived. Then using a standard SPICE modeling tool, frequency performance and power were calculated for each of the 49 silicon process conditions.
For this example, the high and low values used for the extremes of each of the four process variables represent ±2σ variation from the mean of each process variable, respectively. Using a DOE-style response surface analysis in four dimensions, two least-squares best-fit polynomial response functions can be derived, one for RO frequency and another for RO power, each as a function of polynomials of the four process variables:
Freq(Ad, Af, Pd, Pf) = (9.08e8) + (-2.3e6)Ad + (1.22e7)Pd + (-3.54e6)Af² + (3.92e6)Pf²
Power(Ad, Af, Pd, Pf) = (8.28e-5) + (-4.04e-7)Ad + (9.17e-7)Pd + (-1.0e-6)Af² + (1.41e-7)Pf²
Process variables in the response functions are expressed in units of sigma for each process variable. Figure 1 shows the RO frequency response surface over the polysilicon process variables only (where a process variable value of zero represents the mean of that variable), while holding the active process variables at their mean values. A similar plot can be generated for power. These equations can be derived using standard math packages.
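As an illustration of that derivation step, the following Python/NumPy sketch fits coefficients of the same polynomial form by least squares. The sim_points rows are placeholders standing in for the 49 simulated (process condition, frequency) results; only the first three are written out:

```python
import numpy as np

# Each row: (Ad, Af, Pd, Pf) in units of sigma, plus the simulated RO
# frequency in Hz. Placeholder data -- in practice these 49 rows come
# from the litho/etch simulation and SPICE runs described above.
sim_points = np.array([
    # Ad,   Af,   Pd,   Pf,   freq
    [ 0.0,  0.0,  0.0,  0.0,  9.08e8],
    [ 2.0,  0.0,  0.0,  0.0,  9.03e8],
    [-2.0,  0.0,  0.0,  0.0,  9.13e8],
    # ... remaining process-condition rows ...
])

x = sim_points[:, :4]
freq = sim_points[:, 4]

# Basis functions matching the published model form:
# constant, Ad, Pd, Af^2, Pf^2.
basis = np.column_stack([
    np.ones(len(x)),   # constant term
    x[:, 0],           # Ad
    x[:, 2],           # Pd
    x[:, 1] ** 2,      # Af^2
    x[:, 3] ** 2,      # Pf^2
])

# Least-squares best fit of the response-surface coefficients.
coeffs, *_ = np.linalg.lstsq(basis, freq, rcond=None)
print(coeffs)  # c0, c_Ad, c_Pd, c_Af2, c_Pf2
```

The same fit, with power substituted for frequency, yields the power response function.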
The second task is to generate the expected distributions for the RO frequency and power responses, given that each of the four silicon process variables is assumed to be a normal (Gaussian) random variable and that the response surfaces described above quantitatively relate every simultaneous choice of values of the four process variables to the responses. For each trial, a value for each of the four process variables is drawn from a standard normal random number generator, and one response value each for RO frequency and RO power is calculated. This is repeated as many times as desired, creating a set of response values for RO frequency and power.
Figure 2. Frequency distribution from a Monte Carlo simulation.
In this example, 3000 Monte Carlo RO frequency and power values were generated. Plotting these response values in histogram format generates the expected distributions for RO frequency and power responses due to process variations. Placing limits of maximum and minimum acceptable frequency and power on these histograms allows the determination of the percent of ROs produced that fall within the desired range (Figs. 2 and 3). This calculated percentage is the score of this circuit layout. This Monte Carlo analysis can also be done with standard math packages.
Figure 3. Power distribution from a Monte Carlo simulation.
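The scoring step itself reduces to a few lines in a standard math package. Here is a sketch using the frequency and power equations above; note that the acceptance limits below are invented for illustration and are not the limits behind Figs. 2 and 3:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3000  # number of Monte Carlo trials, as in the example

# Each process variable ~ N(0, 1) in sigma units.
ad, af, pd, pf = rng.standard_normal((4, n))

# Response-surface equations from the text.
freq = 9.08e8 + (-2.3e6)*ad + (1.22e7)*pd + (-3.54e6)*af**2 + (3.92e6)*pf**2
power = 8.28e-5 + (-4.04e-7)*ad + (9.17e-7)*pd + (-1.0e-6)*af**2 + (1.41e-7)*pf**2

# Hypothetical acceptance limits (illustrative only).
freq_ok = (freq >= 8.8e8) & (freq <= 9.4e8)
power_ok = (power >= 8.0e-5) & (power <= 8.6e-5)

freq_score = 100.0 * freq_ok.mean()
power_score = 100.0 * power_ok.mean()

# The final layout score is the lowest of the per-response scores.
print(f"frequency score: {freq_score:.1f}%")
print(f"power score:     {power_score:.1f}%")
print(f"final score:     {min(freq_score, power_score):.1f}%")
```

Because each trial evaluates a closed-form polynomial rather than re-running SPICE, thousands of trials complete almost instantly.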
By altering the circuit layout in an attempt to reduce the effects of process variation on key transistors, the analysis can be run again to determine which layout is better for manufacturability. Once the SPICE analysis is done for each of the 49 process conditions, the equation derivations and Monte Carlo analysis can be completed in minutes.
Results
In our ring oscillator circuit, we obtained histograms for both frequency and power, as shown in Figs. 2 and 3, respectively. For frequency, 98.4% of expected production material is within acceptable limits (Fig. 2). For power, 99.2% of expected production material falls within the range of acceptability (Fig. 3). Taking the lower of the two, the final score for our original design is 98.4%.
By redesigning the layout to improve the robustness of the gate region to process variation, we should be able to improve this score. This is the primary goal of this new technology being developed by the EDA community. Also, by doing this type of DFM work at early cell and block level design, one can ensure that skew analysis at higher levels of design will be successful.
Integration with DFM flow
By calculating a yield-related score from systematic defect analysis, one can evaluate various layout possibilities and balance the multiple constraints of the DFM design flow. Many DFM tools deliver contradictory messages about where a layout can be improved; scores based on yield-estimator equations are one method for comparing trade-offs among multiple DFM advantages.
By adding layout vs. timing and power analysis to the designer’s list of scores, a more defined cost model equation can be developed. Process variation can be reduced on transistors that are in the critical timing path. Devices that do not contribute to the primary timing paths can be left alone; the same applies to power analysis.
The effects of other DFM tools on timing and power can also be tested, including wire-bending, via-doubling, and dummy-fill tools. This type of work must be done at the cell and block level first, as cells and blocks cannot be changed at the upper levels of the hierarchy. DFM improvements work best when the methodology begins at the earliest point in the design flow.
Conclusion
To realize the value of DFM investment at technologies beyond 90nm, one must account for the largest category of yield limitations: systematic defects. New EDA tools can add to the designer’s abilities to account for systematic defects due to process variation, primarily in the lithography and etch modules. Using these tools, one can calculate a mathematical score based on timing and power analysis across process variation. This score, being directly related to yield, delivers a metric to the layout designer that will help improve a circuit for manufacturability.
Reference
1. “DFM: Where is the proof?,” DFM panel discussion, Design Automation Conference (DAC), 2006.
William Graupp received his BSEE from Drexel U. and is technical marketing engineer at Mentor Graphics Co., 8005 Boeckman Road, Wilsonville, OR 97070; ph 503/685-1733, e-mail [email protected].
J. Ken Patterson received his PhD from the U. of Illinois at Urbana-Champaign in 1995 and is a member of the technical staff at Avago Technologies.