Using ‘statistical dashboards’ to automate particle and yield analysis
11/01/2005
In semiconductor fabrication, optimized control over hundreds of process steps and multiple equipment sets is essential to achieving high yields in volume manufacturing. A “statistical dashboard” concept, originally created 10 years ago, has evolved through enhancements and additional algorithms that extend automatic analysis of process test results from a centralized database for particle reduction and yield improvement. Two new statistical tools have been added to the dashboard to separately identify random and systematic yield losses on product wafers for improvements in process defectivity.
During the IC manufacturing cycle, hundreds of process stages and many fabrication tools are typically involved. Because of this complexity, it is very likely that differences exist between tools at a given process step. These differences can affect the number of particles physically measured on product die as well as probe yield from electrical test. An automatic statistical method must be used to ensure that no process tool is generating poor results in terms of particles and yield. We call this method the statistical dashboard [1].
Originally developed in 1995, the dashboard is a set of statistical tests that compares results from wafer fab equipment at each process stage. The comparison can be done on any response variable, such as parametric measurements, electrical tests, particles, and random yield. A key aspect of the statistical dashboard is its centralized database, which holds all the relevant information (i.e., process tool identification for each lot, yield data, and other test results). The dashboard operates as a complex statistical program, automatically generating a summarized output describing the yield performance level of a fab line tool at a specific stage in the process.
Initially, the statistical dashboard was based on analysis of variance (ANOVA), which is suited to test results with normally distributed data (a Gaussian curve), but a key enhancement was made in the late 1990s, adding a nonparametric test to the dashboard. To detect most of the differences between process tools, the dashboard uses the Kruskal-Wallis nonparametric test for yield and particle data, which typically do not follow a standard statistical distribution. Most recently, two new statistical algorithms have been developed to calculate random and systematic yield losses. Random yield losses are most likely caused by particles, while systematic yield failures are associated with process integration issues. Distinguishing between the two losses is an important new capability in the dashboard. Random yield is related to defectivity because it takes into account only yield losses randomly distributed on the wafer. Statistical analysis on random yield is thus more powerful, since it reflects only one source of variation: particles.
This article first discusses the statistical dashboard as it is applied to particles for prevention and quick detection of defectivity issues. Then we discuss the dashboard’s ability to identify random yield losses. To conclude, we present our defectivity and yield management system, with correlations between its two components: the particle dashboard and the random yield dashboard.
Particle dashboard
For input into the dashboard, particles are measured on product wafers using inspection tools at several process stages during semiconductor fabrication. The number of particles measured on each lot is stored in a database. Tracking information is also stored for each lot, including the process stage and process tool. It is then possible to extract all these data from the database and to perform a statistical test. If the results show that a process tool is generating more particles on a product die, the equipment is shut down and the problem is addressed. Figure 1 provides an example of test results from the particle dashboard, which identifies one tool as generating more particles on average than other process equipment in the same process step.
Figure 1. Tool A (red crosses) generates more particles on average than two other tools (represented by blue asterisks and black dots).
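As a rough illustration of the comparison behind Fig. 1, the sketch below applies the Kruskal-Wallis test to per-lot particle counts grouped by process tool. The tool names, counts, and 0.05 significance level are hypothetical, not taken from fab data.

```python
# A minimal sketch of the dashboard's tool comparison, assuming per-lot
# particle counts grouped by process tool (all data here are hypothetical).
from scipy.stats import kruskal

# Hypothetical particle counts per lot, keyed by process tool.
counts_by_tool = {
    "Tool A": [12, 15, 30, 22, 18, 25],
    "Tool B": [8, 10, 7, 12, 9, 11],
    "Tool C": [9, 11, 10, 8, 13, 7],
}

# Kruskal-Wallis is rank-based, so it requires no normality assumption,
# which suits particle data that lack a standard distribution.
h_stat, p_value = kruskal(*counts_by_tool.values())

if p_value < 0.05:  # significance threshold is an assumption
    print(f"Tool difference detected (H={h_stat:.2f}, p={p_value:.4f})")
```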
Random yield dashboard
In semiconductor processes, yield has two main components: random and systematic. Systematic yield is related to process integration issues, while random yield is related to defectivity issues and takes into account only yield losses that are randomly distributed on the wafer. Two new algorithms [2] have been developed to calculate random and systematic yield when the size of dice becomes large and the risk from particles grows. One of the additions is a new image-processing algorithm, which counts the number of bad dice in the neighborhood of a bad die i. If the number of bad dice is higher than a given threshold, the system issues a signal and places the bad die i in a cluster of bad dice. In that case, bad die i is taken into account for the systematic yield calculation. If the number of bad dice in the surrounding neighborhood is lower than the given threshold, then bad die i is taken into account for the random yield calculation (see Fig. 2).
Figure 2. Example of a wafer showing random and systematic yield issues.
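The neighborhood-counting idea can be sketched as follows, assuming the wafer map is a simple 2D array in which 1 marks a bad die and 0 a good one. The 8-neighbor window and the threshold value are illustrative assumptions; reference [2] defines the actual algorithm.

```python
# A minimal sketch of neighborhood counting on a wafer map, assuming a 2D
# grid where 1 marks a bad die (the threshold and map are hypothetical).
import numpy as np

def classify_bad_dice(wafer_map: np.ndarray, threshold: int):
    """Split bad dice into systematic (clustered) and random sets."""
    rows, cols = wafer_map.shape
    systematic, random_loss = [], []
    for r in range(rows):
        for c in range(cols):
            if wafer_map[r, c] != 1:
                continue
            # Count bad dice among the 8 neighbors of bad die i.
            neighborhood = wafer_map[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            neighbors_bad = int(neighborhood.sum()) - 1  # exclude die i itself
            if neighbors_bad > threshold:
                systematic.append((r, c))   # clustered: systematic yield loss
            else:
                random_loss.append((r, c))  # isolated: random yield loss
    return systematic, random_loss
```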
A new spatial correlation algorithm is similar to the image-processing algorithm, but an additional iterative statistical test is performed to set the threshold for determining whether random or systematic yield calculations are applied. The spatial correlation algorithm starts with the hypothesis of fully random yield losses on a wafer. The threshold is then decreased to a level where some dice are included in the systematic yield loss category. Iterations continue until the statistical test does not detect any systematic yield losses. The remaining dice not included in the systematic yield losses are then included in the random yield loss category for calculation.
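A sketch of the iterative search might look like the following, reusing the classify_bad_dice() helper from the previous sketch. The article does not name the statistical test applied at each iteration, so detects_clustering() is a hypothetical placeholder for it, as is the starting threshold.

```python
# A minimal sketch of the iterative threshold search. detects_clustering()
# stands in for the unspecified statistical test; it should return True
# while the "random" dice still show a systematic spatial pattern.
def split_yield_losses(wafer_map, detects_clustering, start_threshold=8):
    # Begin from the hypothesis that all yield losses are random
    # (a high threshold classifies every bad die as random).
    for threshold in range(start_threshold, -1, -1):
        systematic, random_loss = classify_bad_dice(wafer_map, threshold)
        # Stop once the remaining random dice show no systematic pattern.
        if not detects_clustering(random_loss, wafer_map):
            return systematic, random_loss
    # Fallback: lowest threshold, most dice classified as systematic.
    return classify_bad_dice(wafer_map, 0)
```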
Random and systematic yield in statistical analysis
Statistical tests are more powerful with random or systematic yield as the response variable. The ability to identify a bad die as random yield loss reduces “noise” by narrowing down the potential causes, because only defectivity is taken into account. Using the new algorithms, random yield is calculated on each wafer at probe and stored in the dashboard’s database. This information, along with tracking information, is used to perform a statistical test on random yield. If there is a statistical difference, the process tool giving lower random yield is flagged (Fig. 3).
Figure 3. Random yield dashboard shows Tool A (red crosses) with lower random yield results compared to other tools.
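The flagging step behind Fig. 3 can be sketched in the same spirit: if the Kruskal-Wallis test detects a difference in random yield across tools, the tool with the lowest median is flagged. Tool names, yield values, and the significance level are again hypothetical.

```python
# A minimal sketch of the random yield flagging step, assuming per-lot
# random-yield percentages grouped by process tool (data are hypothetical).
import statistics
from scipy.stats import kruskal

random_yield_by_tool = {
    "Tool A": [78.0, 75.5, 80.1, 74.2],
    "Tool B": [88.3, 86.7, 89.5, 87.1],
    "Tool C": [87.9, 88.4, 86.2, 89.0],
}

_, p_value = kruskal(*random_yield_by_tool.values())
if p_value < 0.05:  # significance level is an assumption
    # Flag the tool with the lowest median random yield.
    flagged = min(random_yield_by_tool,
                  key=lambda t: statistics.median(random_yield_by_tool[t]))
    print(f"Flag {flagged}: lowest median random yield (p={p_value:.4f})")
```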
Conclusion
Identifying random yield loss has increased the ability to address defectivity issues. Thus, overall manufacturing yields can be increased by up to several percentage points, based on early results from the dashboard with the two new algorithms. Particle and random yield dashboard results are now reviewed in weekly meetings and process tools are stopped based on their effect on the product. Extensive use of the two statistical dashboards has improved the process by strongly linking defectivity and yield.
References
- F. Bergeret, Y. Chandon, “Improving Yield in IC Manufacturing by Statistical Analysis of a Large Database,” Micro, 1999.
- F. Bergeret, M. Gomez, C. Le Gall, “Wafer Yield Enhancement Using Random Yield and Systematic Yield,” Global Semiconductor, SPG Media Ltd, London, UK, 2004.
Francois Bergeret received his PhD in statistics from Paul Sabatier U. in Toulouse, France. He is responsible for defectivity and is a Six Sigma Black Belt in the Mos20 wafer fab of Freescale Semiconductor France, 134, avenue du général Eisenhower, BP 72329, 31023 Toulouse Cedex 1, France; ph 33/561-191205, e-mail [email protected].
Caroline Le Gall received her PhD in statistics from Paul Sabatier U. She is a statistician and Six Sigma Black Belt in Mos20.
Martine Gimbre received her license in engineering from Bordeaux U. and is a defectivity engineer in Mos20.