A new paradigm for evaluating IC yield loss
10/01/2001
Yield/productivity SPECIAL REPORT
Dennis Ciplickas, Sherry F. Lee, Andrzej J. Strojwas, PDF Solutions Inc., San Jose, California
overview
Once largely a function of critical area, yield loss is increasingly dependent on critical features such as contacts and vias. Characterizing this type of yield loss is a challenge because of the extremely small failure rates required for a high-yielding product. There is, however, a comprehensive new method that models all dominant yield loss mechanisms using integrated test chips, built on a paradigm of precisely modeling each yield loss mechanism individually. The result is a matrix that details these mechanisms by block, layer, and type.
The most important yield loss mechanisms in VLSICs can be classified into several categories. Typically, process-related yield losses dominate and are caused by factors such as misprocessing (e.g., equipment-related problems), systematic effects (e.g., printability problems), or random defects.
High-performance ICs may exhibit design marginalities; they are not sufficiently robust to withstand either process fluctuations or environmental factors (e.g., supply voltage or temperature). Test-related yield losses caused by incorrect testing can also play a significant role.
Figure 1. Wafers/week vs. months to full production for various technology nodes, showing clearly that IC production ramps are accelerating.
At previous technology nodes, random defects caused by particle contamination were the dominant yield loss mechanism. For example, such random defects typically accounted for 60% of yield loss in a mature 0.35µm process. This has changed significantly for 0.18µm and smaller technologies, where random defects typically contribute <45%. At these nodes, incompatibilities between design and process, primarily systematic in nature, are critical because of increased IC and process complexities and their often-subtle interactions.
Figure 2. Area-based yield remains relatively constant as CDs shrink, while feature-based yield decreases.
Moreover, increasing focus on consumer markets has shrunk time-to-volume for leading-edge products. Here, the classic illustration is that it took more than six years to sell the first million PCs, but it took only two days to sell the first million Sony PlayStation IIs. This kind of demand often means that there is insufficient time to integrate products and tune processes during product yield ramp.
With shorter ramps of successive IC technology nodes (Fig. 1), there is less time to bring leading-edge products to volume production. As a result, yield loss due to process integration issues has been increasing and systematic yield loss has been significant. Fortunately, a major fraction of these yield losses can be identified and then recovered by accurately modeling key yield loss mechanisms.
When yield loss was dominated by random defects, yield was typically modeled as a function of defect density and chip critical area [1]. Several models have been developed, ranging from the simple Poisson and negative binomial models to more complicated critical-area-based models that take defect size distributions into account. The systematic yield loss component in these models was typically <5%.
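As a rough illustration of these two classic models, here is a minimal Python sketch; the die area, defect density, and clustering parameter below are placeholder values, not data from this article.

```python
import math

def poisson_yield(area_cm2, defect_density):
    """Poisson yield model: Y = exp(-A * D)."""
    return math.exp(-area_cm2 * defect_density)

def negative_binomial_yield(area_cm2, defect_density, alpha=2.0):
    """Negative binomial model: Y = (1 + A*D/alpha)**(-alpha).
    alpha captures defect clustering; as alpha grows large, the
    model converges to the Poisson form."""
    return (1.0 + area_cm2 * defect_density / alpha) ** (-alpha)

# Placeholder inputs: a 1 cm^2 die at 0.5 defects/cm^2.
print(f"Poisson:           {poisson_yield(1.0, 0.5):.2f}")            # ~0.61
print(f"negative binomial: {negative_binomial_yield(1.0, 0.5):.2f}")  # ~0.64
```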
Yield models that are only a function of chip critical area are not sufficient for advanced IC technologies. A more comprehensive set of design attributes must be introduced to predict yield adequately, especially in early production stages. These attributes may include the number of contacts to polysilicon and source-drain regions of NMOS and PMOS transistors, or the number of vias between metal layers, including stacked vias. These design attributes or features can be extracted from a GDS-II layout by software tools [2].
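To show how such extracted attributes feed a yield estimate, here is a minimal sketch assuming a Poisson-style limited yield per feature type; every count and failure rate below is hypothetical.

```python
import math

# Hypothetical design attributes extracted from a layout (counts per die)
# and assumed per-feature failure rates; illustrative values only.
feature_counts = {
    "poly_contacts": 2.0e6,
    "diffusion_contacts": 6.0e6,
    "m1_m2_vias": 8.0e6,
    "stacked_vias": 5.0e4,
}
failure_rates = {  # failures per feature
    "poly_contacts": 2e-9,
    "diffusion_contacts": 2e-9,
    "m1_m2_vias": 3e-9,
    "stacked_vias": 1e-7,
}

# Feature-based yield: product over feature types of exp(-N_f * lambda_f).
y = math.exp(-sum(n * failure_rates[f] for f, n in feature_counts.items()))
print(f"feature-limited yield: {y:.3f}")
```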
In our comparison of area-based vs. feature-based yields per technology node (Fig. 2), we made these assumptions: yield loss is driven by polysilicon, metal, contacts, and vias; defect densities on polysilicon and metal are improved by 55%/node; failure rates per contact and via are improved by 43%/node; and chip area increases by 10%/node.
The data in Fig. 2 show that with 0.18µm volume manufacturing, critical feature-based yield losses start to dominate. This is in contrast to area-based yield loss, which has been kept relatively constant by continual improvement through the efforts of equipment and process engineers; here, increasing metal layers have caused a slight decrease in yield/generation.
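The mechanics of this comparison can be replayed numerically. In the sketch below, the per-node improvement factors come from the assumptions above, but the starting values and the assumption that contact/via count per chip roughly doubles per node are ours, so the printed numbers illustrate the trend rather than reproduce Fig. 2.

```python
import math

area, dd = 1.0, 0.5       # chip area (cm^2), defect density (defects/cm^2)
lam, n_cv = 25e-9, 2e7    # failure rate per contact/via; contacts+vias/chip

for node in ("0.35um", "0.25um", "0.18um", "0.13um"):
    y_area = math.exp(-area * dd)     # area-based limited yield
    y_feat = math.exp(-n_cv * lam)    # feature-based limited yield
    print(f"{node}: area-based {y_area:.2f}, feature-based {y_feat:.2f}")
    dd *= 0.45                        # defect density: -55% per node
    lam *= 0.57                       # contact/via failure rate: -43% per node
    area *= 1.10                      # chip area: +10% per node
    n_cv *= 2.2                       # area growth plus feature-density doubling
```

With these placeholder inputs the area-based term actually improves node over node; the roughly flat area-based curve in Fig. 2 additionally reflects each node's calibrated starting densities and the growing number of metal layers.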
Figure 3. Allowable integration failures per billion contacts or vias, by technology node.
Via or contact problems predominantly manifest themselves as opens. If vias and contacts are not redundant, opens result in a faulty IC. It is, however, extremely difficult to diagnose this particular yield loss mechanism, especially in a completed IC. Only failed-bit analysis of memory ICs or embedded memory can make this identification, and it typically must be confirmed by failure analysis techniques such as examination of cross-sections by SEM or TEM. Another critical failure mode in advanced technologies is the quality and continuity of the salicide layer formed by reacting a metal with the underlying polysilicon; a defective salicide causes very high resistance along a signal path.
The allowable systematic failure rates for contact/via or salicide problems at 0.18µm and below are <3 per billion vias (Fig. 3). Typical advanced ICs have more than 1 billion vias/wafer, so achieving these failure rates means tolerating fewer than three via failures on an entire 200mm wafer. As discussed above, determining failure rates from finished ICs is very difficult, but knowledge of these rates is critical for yield estimation and diagnosis of dominant yield loss mechanisms.
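To make the scale concrete, here is a minimal sketch; the per-die via count is an assumption for illustration.

```python
import math

fail_rate = 3e-9       # target: <3 failures per billion vias (Fig. 3)
vias_per_wafer = 1e9   # typical advanced ICs exceed 1 billion vias per wafer
vias_per_die = 2e7     # assumed die complexity (illustrative)

# Expected failing vias per wafer at the target rate, and the corresponding
# via-limited die yield under a Poisson model.
print(f"failing vias per wafer: {vias_per_wafer * fail_rate:.1f}")
print(f"via-limited die yield:  {math.exp(-vias_per_die * fail_rate):.1%}")
```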
Yield loss mechanism determination
In previous technology nodes, engineers used integrated test chips to characterize a manufacturing process and build yield models. Unfortunately, these test chips are no longer adequate, since they do not contain sufficient features to estimate the required single-digit-per-billion failure rates. We have developed special Characterization Vehicles (CVs; see "Not your father's test chip") that focus on a particular set of features (e.g., metal 1 to metal 2 vias) or a particular process module (e.g., STI or salicide) [3]. These specially designed test structures characterize both systematic and random defects, as well as CD variation on critical layers. In addition to providing a statistically significant sample size, test structures must reflect product layout attributes to predict product yields effectively.
Figure 4. A yield impact matrix: process modules vs. product blocks for DRAM technology.
By analyzing CV data along with previously extracted product design features, we can determine all the failure rates needed to predict actual product yield [4] (e.g., for contact/via opens as well as random defect characteristics). We can then present, in a yield impact matrix (YIMP), a detailed yield loss breakdown by layer and by root-cause mechanism [5]. The details of the limited yield modeling approaches for individual yield loss mechanisms are beyond the scope of this article [6].
The main benefit of our yield impact prediction methodology is that it quantifies yield issues for a particular product or set of products. This allows for better prioritization of fab resources, more accurate production planning and scheduling, and, ultimately, faster time-to-volume manufacturing.
The key difference between YIMP and traditional methods is that YIMP takes product design into account. Normally, fabs use the yield of test vehicles to find process modules that are causing problems. For example, if the yield of stacked via structures is significantly lower than the yield of contact structures, the conclusion may be that there is a problem with stacked vias, and resources are devoted to this issue. YIMP may draw a different conclusion, however, because it accounts for the impact on the overall product yield, rather than simply the yield of an individual process step or module.
Consider a case where a test vehicle has the same number of stacked vias and contacts, and the stacked vias have a lower test structure yield than the contacts. Typical products have 100 to 1000x more individual contacts than stacked vias, so the impact of contacts on the product will be much greater than that of stacked vias.
Therefore, even if the test structure yield of stacked vias is lower than that of contacts, the product-limited yield of stacked vias may be higher than that of contacts, indicating that contacts cause more product yield loss. YIMP analysis would then suggest placing more resources on finding the root cause of the contact issue, in contrast to the conclusion drawn from standard test vehicles, which would point to the stacked via problem.
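A minimal numeric sketch of this reversal, with counts and failure rates assumed purely for illustration:

```python
import math

# Assumed per-feature failure rates: stacked vias are 10x worse per feature,
# so their test-structure yield is lower than that of contacts.
lam_contact = 5e-9
lam_stacked = 50e-9

# But the product has far more contacts than stacked vias (100-1000x).
n_contacts, n_stacked = 5e8, 1e6

y_contact = math.exp(-n_contacts * lam_contact)  # product-limited yields
y_stacked = math.exp(-n_stacked * lam_stacked)

print(f"contact-limited product yield:     {y_contact:.1%}")  # ~8%
print(f"stacked-via-limited product yield: {y_stacked:.1%}")  # ~95%
```

Despite the worse per-feature failure rate of stacked vias, contacts dominate the product yield loss in this sketch.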
Information from YIMP analysis can be used to create a Pareto chart of yield loss mechanisms for a particular product manufactured with a given fabrication process. Efforts can then be prioritized to focus on improving yield by improving the process module (e.g., salicide), modifying layout design rules, or even modifying product design (e.g., reducing critical path delay by inserting buffers and repeaters).
Consider a 64Mb embedded DRAM where YIMP analysis was used to determine systematic yield gaps (Fig. 4). We had two questions: Is the overall yield more limited by DRAM or by logic? And which layers most affect yield?
Figure 4 shows how we used yield impact matrices to predict yields broken down by DRAM and logic blocks, based on estimates of failure rates from our CVs already run in the fab we were evaluating. In this design, DRAM core cells are repairable while the periphery is not. Therefore, in the yield impact matrix, the core and DRAM blocks are shown with and without repair. The limited yield of each module (e.g., metal 1, via 2, etc.) for layer shorts and hole opens is shown in Fig. 4. The total yield of each block is the product of the limited yields for each module.
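The arithmetic is straightforward to sketch; the module limited yields below are placeholders rather than the actual Fig. 4 values.

```python
import math

# Placeholder limited yields per process module for the DRAM core block.
# A block's total yield is the product of its module limited yields.
unrepaired = {
    "poly_2 (self-aligned contact)": 0.30,
    "contact": 0.90,
    "metal_1": 0.95,
    "via_1": 0.85,
    "metal_2": 0.97,
}
# Redundancy repair recovers most core cell failures; model this as an
# improved effective limited yield for the repairable module (assumed).
repaired = {**unrepaired, "poly_2 (self-aligned contact)": 0.98}

print(f"core yield before repair: {math.prod(unrepaired.values()):.1%}")
print(f"core yield after repair:  {math.prod(repaired.values()):.1%}")
```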
We can make a few key observations and conclusions from the yield impact matrix in Fig. 4. It is apparent that failures in the DRAM-specific self-aligned contacts (i.e., poly 2) will affect unrepaired core yield. Most of these cells can be repaired successfully, however.
Therefore, while the unrepaired DRAM yield is lower than that of the logic block (i.e., 5.16% unrepaired DRAM vs. 50.35% logic block), once repair is taken into account, we see that the logic block limits the product yield, so the yield impact matrix guides us to focus on the logic block.
Figure 5. A yield impact matrix: process modules vs. individual logic blocks.
In addition, the logic block shows a high sensitivity to the via-1 module. With further analysis, we break down the logic block into its individual blocks (Fig. 5). Blocks 1 and 3 show much lower overall expected block yield than the other blocks due to the via-1 module, specifically because of via border size (see Fig. 5 inset). Our CV analysis determined that the failure rate for borderless vias is almost 40x larger than for bordered vias. For most blocks, this difference in failure rate did not adversely affect yield because those blocks had a small percentage of borderless vias. For blocks 1 and 3, however, a significant portion of the vias were borderless. Consequently, the higher failure rate due to borderless vias resulted in significant yield loss.
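The effect of the borderless fraction can be sketched directly; the ~40x ratio is the CV finding, while the absolute rates, via counts, and fractions below are assumptions.

```python
import math

lam_bordered = 1e-9                  # assumed failure rate per bordered via
lam_borderless = 40 * lam_bordered   # CV finding: ~40x worse when borderless

def via1_limited_yield(n_vias, frac_borderless):
    """Via-1 limited yield for a block, splitting vias by border type."""
    n_bl = n_vias * frac_borderless
    n_b = n_vias - n_bl
    return math.exp(-(n_b * lam_bordered + n_bl * lam_borderless))

# Two hypothetical blocks with equal via counts but different border usage:
print(f"mostly bordered (2% borderless): {via1_limited_yield(5e6, 0.02):.1%}")
print(f"heavily borderless (40%):        {via1_limited_yield(5e6, 0.40):.1%}")
```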
Overall, in this example, YIMP focused our improvement efforts by identifying the logic block over the DRAM block as the larger contributor to product yield loss. Furthermore, the process-design interaction of vias was found to be the root cause of the lower logic yield, adversely affecting two of the logic blocks. To increase logic yield, the recommendation was therefore to increase borders where possible (i.e., where metal design rules are not violated) and to use CVs to investigate process modifications that would make the process more robust to the border design.
Conclusion
Increasing design and process complexities have shifted the dominant cause of yield loss from a function of critical area to a function of critical features. Combined with increasing time-to-volume pressure, these trends have made traditional methods of ramping new products and processes less effective. To address this, we have developed a system of methods and tools that accurately models today's principal yield loss mechanisms, so that a major fraction of yield losses can be identified and recovered.
Acknowledgments
The term "Characterization Vehicles" is a trademark of PDF Solutions Inc.
References
1. D.J. Ciplickas, et al., "Advanced Yield Learning Through Predictive Micro-Yield Modeling," Proc. ISSM 1998, Tokyo, Oct. 1998.
2. pdEx Users Manual, PDF Solutions Inc., 1998.
3. A.J. Strojwas, "Test Structures for VLSIC Yield Ramp Maximization," ICMTS Tutorial, Monterey, CA, March 2000.
4. C. Hess, et al., "Distribution Using a Single Layer Short Flow NEST Structure," ICMTS Digest, Monterey, CA, March 2000.
5. Patents pending.
6. D.J. Ciplickas, X. Li, A.J. Strojwas, "Predictive Yield Modeling of VLSICs," Proc. of IWSM 2000, Honolulu, HI, June 2000.
Dennis Ciplickas is director of engineering for yield and performance modeling at PDF Solutions Inc., 333 W. San Carlos St. #700, San Jose, CA 95110; ph 408/938-6407, fax 408/280-7915, e-mail [email protected].
Sherry Lee is engagement director at PDF Solutions.
Andrzej Strojwas is the Joseph F. and Nancy Keithley Professor of Electrical and Computer Engineering at Carnegie Mellon University and chief technologist at PDF Solutions.
_________________________________
Not your father's test chip
Characterization Vehicles (CVs) differ from traditional test chips in several ways:
- Layout DOE: The number and variation of test structures placed on a CV is dictated by a design of experiments (DOE) that is generated after extracting critical feature statistics from layouts of real products. Software automates the task of sifting through hundreds of gigabytes of layout files in product libraries and cores to extract these statistics [2]. As a result, a CV is much more likely to suffer the same yield loss mechanisms as the product being manufactured.
- Process DOE: The manufacturing process recipe is studied and simulated to help produce a lot split plan for using a CV to determine process marginality, especially for hard-to-manufacture features. During this phase of CV design, tradeoffs may be made with the layout DOE to ensure results are statistically significant. A CV can be used with a process of record (POR) as a line monitor for production, or in planned lot splits to determine design rules or help debug issues during process integration and yield ramp.
- Analysis software for product yield prediction: The CV is tightly coupled with software that statistically analyzes the 50-100 megabytes of results generated by the vehicle to compute failure rates of critical features, as sketched after this list. These rates are used in yield models, combined in a yield impact matrix, to predict final product yield. Since most CVs run in short-flow loops, this creates an extremely valuable feedback mechanism for quickly and cheaply debugging and correcting yield loss.
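A minimal sketch of the failure rate computation, assuming independent Poisson failures; the structure counts and chain length below are hypothetical.

```python
import math

def feature_failure_rate(n_pass, n_total, features_per_structure):
    """Estimate a per-feature failure rate from test structure results,
    assuming independent Poisson failures: Y = exp(-N * lambda)."""
    y = n_pass / n_total
    return -math.log(y) / features_per_structure

# Hypothetical CV result: 4 of 2000 via-chain structures failed,
# each chain containing 100,000 vias.
lam = feature_failure_rate(1996, 2000, 1e5)
print(f"estimated via failure rate: {lam * 1e9:.1f} per billion")  # ~20/billion
```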
The CV methodology has benefited from several years of application in real process integration and yield ramp projects involving aggressive technology nodes and the most complicated high-performance processor, memory, and SOC designs in production today. The CV methodology considers varying requirements of yield improvement work throughout the technology and product life cycle.
For example, early in technology development, the primary purposes of CVs are to derive design rules, optimize process specifications, and provide basic look-ahead information (i.e., fail rates, defect densities, etc.) for technology yield and volume planning. During process transfer and initial product yield ramps, however, CVs must accurately assess the stability of well-defined design rules and process specs to allow separation of process-dominant vs. product-dominant yield loss mechanisms. Finally, during volume manufacturing, CVs must provide regular and accurate measurement of baseline process yield and insight into critical process modules. Unlike the design of traditional test chips, CV design is expected to evolve and to be managed as an integral part of the normal transitions that take place during the technology life cycle.
A similar approach is used to ensure efficient CV testing and analysis turn-around time across various engineering situations. For example, process characterization requires longer test times to allow for reasonable statistics of process performance across pattern and process variation. Problem diagnosis requires faster test time, visibility into only specific mechanisms, and very fast analysis turn-around time. Finally, baseline yield monitoring requires the fastest possible test and analysis times, but only limited visibility into selected issues. The CV DOE is exploited to make the best tradeoffs possible in each of these situations.