
Parametric control for wafer fabrication: New CIM techniques for data analysis


09/01/1997







Robert C. Farrier, Honeywell, MICRO SWITCH Division, Richardson, Texas

CIM systems designed for VLSI wafer fabs often provide only limited capability for monitoring parametric variations.

In particular, most systems do not have an adequate capacity for storing and analyzing the large amount of data collected during wafer probe and package test. This article describes some techniques for addressing this problem, so that test data can be used more effectively for process optimization.

Most computer integrated manufacturing (CIM) systems have been optimized to monitor point defect failures in wafer fabs producing digital VLSI circuits. Unfortunately, many of these systems do not have adequate capabilities for parametric control. This can be a severe problem for wafer fabs producing analog ICs, as well as for fabs producing mixed-mode ICs that combine analog and digital circuitry on a single chip.

Faced with this problem, engineers at the Sensor FAB of Honeywell's MICRO SWITCH Division undertook an in-house development effort to enhance existing CIM packages with additional data collection and analysis capabilities. These additional capabilities now provide critical support for engineering efforts to improve product yield and performance.

Point defect vs. parametric failures

In VLSI circuits, the most common yield-limiting mechanism is the point defect, which usually causes a catastrophic failure. Product yield for a given die size is primarily a function of average defect densities in the fab (which in turn depend on particle counts, handling problems, etc.). Most yield improvement efforts - and therefore most CIM system capabilities - are aimed at monitoring and reducing point defect density levels.

Point defects are not the only IC failure mechanism, however. Failures can also result from excessive parametric variation in the process, especially when the design includes sensitive analog circuitry. In analog circuit designs, transistors operate mostly in the active region, where performance depends on a variety of process parameters. By comparison, transistors in digital circuits will normally be either turned off or operated in the saturation mode and will typically be less sensitive to process variation.

Process gradients

Although less critical for digital circuits, excessive process gradients from lot-to-lot, wafer-to-wafer, and across-the-wafer can be the primary yield limiter for analog circuits. It is important to monitor and understand the process gradient signatures inherent in the different process steps within the wafer fab. The production test data collected during wafer probe and package test is a critical source of information for engineers seeking to understand and reduce these process gradients (Fig. 1).


Figure 1. Wafer maps of measured die performance can help engineers determine cause-effect relationships by identifying characteristic process gradients. When the number of possible die on a wafer is large, the resolution of die level wafer maps can be very good, as shown here. This diagram plots a measured parameter for an analog IC for each of 3484 die on a 4-in.-dia. wafer. The storage requirement for the single wafer shown is about 250,000 bytes (for 27 different measurements made on each die).

When a wafer map indicates that die performance varies in a pattern across the surface of the wafer, an attempt should be made to identify which particular process parameter is causing the variation. Initially we thought that the pattern in Fig. 2 might have been created during application of the photoresist/developer. This hypothesis could not explain, however, the wafer-to-wafer (and even lot-to-lot) repeatability of the pattern. It was then suggested that the pattern might be due to subtle variations in the mask that were being imprinted onto each wafer. This theory was later disproved when different mask sets produced the same distinct pattern.

After considerable study, another proposal emerged: the pattern might be produced by slight (but repeatable) variations in the thickness of the photoresist that occur as the chemical is "spun on" over the device-dependent topography of the wafer. Some additional experimentation verified that this was indeed the correct interpretation.

Once the particular process step causing the variance was identified, designed experiments (DOEs) could be used to determine how to reduce the variance.

This analysis of process gradients illustrates how die measurement data is being used to identify and reduce the process variation. The wafer maps were produced from production test data that had been routinely stored over several years. Had this data not been available, this analysis would probably never have been completed (or it would have taken a great deal more time and effort than it did).

As shown in "CIM enhancements for data analysis," the production test data is stored in the common data files (CDFs) that are created during wafer probe. The stacked wafer maps are also produced from these files.

Wafer trend analysis

In the preceding example, most wafers contained the windmill pattern being analyzed, so the selection of a particular wafer from among the thousands available was not critical. In other cases, though, a process gradient may occur infrequently, and it will be necessary to view "the many" in order to select "the few." This capability is facilitated by storing the statistical data from the CDFs in a SQL database, where it can be easily "filtered" and accessed for display.
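This filtering step can be illustrated with a short sketch that queries a per-wafer statistics table using standard SQL to pull "the few" wide-distribution wafers out of "the many." The table name, column names, and sample values are all invented for the example; the article does not specify the actual schema.

```python
import sqlite3

# Hypothetical schema: "wafer_stats" and its columns are illustrative
# assumptions, not the production database layout.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE wafer_stats (
    lot_id TEXT, wafer_no INTEGER, test_name TEXT,
    mean REAL, sigma REAL, cdf_file TEXT, cdrom_volume TEXT)""")

rows = [
    ("LOT001", 1, "VREG8V", 8.002, 0.010, "lot001_w01.cdf", "VOL_17"),
    ("LOT001", 2, "VREG8V", 8.001, 0.045, "lot001_w02.cdf", "VOL_17"),
    ("LOT002", 5, "VREG8V", 7.998, 0.012, "lot002_w05.cdf", "VOL_18"),
]
con.executemany("INSERT INTO wafer_stats VALUES (?,?,?,?,?,?,?)", rows)

# "Filter" the many wafers down to the few with unusually wide distributions.
outliers = con.execute(
    "SELECT lot_id, wafer_no, cdf_file FROM wafer_stats "
    "WHERE test_name = ? AND sigma > ?", ("VREG8V", 0.03)).fetchall()
print(outliers)  # -> [('LOT001', 2, 'lot001_w02.cdf')]
```

Because the query result carries the CDF file name (and, in the production system, the CD-ROM volume), it doubles as the index used to retrieve the detailed wafer map for any wafer it returns.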

Figure 3a displays the statistical data from 1135 different wafers for a particular test measurement. Plotting the data in this way shows product performance relative to specification limits (i.e., Cpk) for very large populations. In this case, the lower specification limit could be "pulled in" by about 8 mV (green triangle) without significantly reducing product yield.
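The Cpk figure used here compares a population's mean and sigma against its specification limits; the standard definition is the distance from the mean to the nearer limit, measured in units of 3σ. A minimal sketch (the numeric values are invented for illustration, not taken from the VREG8V data):

```python
def cpk(mean, sigma, lsl, usl):
    """Process capability index: distance from the mean to the
    nearer specification limit, in units of 3 sigma."""
    return min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

# A wafer whose mean sits 40 mV above the lower limit with sigma = 10 mV:
print(round(cpk(mean=1.240, sigma=0.010, lsl=1.200, usl=1.300), 2))  # -> 1.33
```

A Cpk of exactly one corresponds to a point on the red triangle in Fig. 3a; values greater than one fall inside it.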

A user might select a particular point of interest and drill down to the detailed wafer map it represents (Fig. 3b). This capability is provided by software that uses the SQL database as an index to identify the file name and CD-ROM volume of the corresponding CDF; opening that CDF then displays the actual wafer map.

A time series plot using box and whisker statistical symbols is another useful way to view quantities of wafer test data. The data for these symbols is included in the SQL database, and each wafer map shows this statistical information next to the color bar for the population histogram. A time series trend plot can be produced by using this data to represent wafer performance chronologically.
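The box and whisker values stored per wafer amount to a conventional five-number summary. The sketch below uses Tukey-style whiskers (1.5 × IQR fences); this convention is an assumption, as the article does not state which one the system uses.

```python
import statistics

def box_whisker(values):
    """Five-number summary for one wafer's symbol in the trend plot:
    (low whisker, Q1, median, Q3, high whisker)."""
    q1, med, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    # Whiskers extend to the most extreme data values inside the fences.
    lo = min(v for v in values if v >= q1 - 1.5 * iqr)
    hi = max(v for v in values if v <= q3 + 1.5 * iqr)
    return lo, q1, med, q3, hi

print(box_whisker(list(range(1, 10))))  # -> (1, 2.5, 5.0, 7.5, 9)
```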


Figure 2. A measurement of the switching threshold of a Hall effect sensor, a critical performance criterion. The upper plot shows the switching point for each die on a typical wafer from a production lot of 48 wafers. The "faint" windmill pattern shows the variations in the Hall offset across the wafer. The lower plot was created by averaging all of the data (by die location) for each of the 48 wafers in the lot. The pattern is much better defined, indicating that the windmill pattern is extremely repeatable for each wafer in the lot. In fact, other lots showed the same pattern.
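The lot-averaged map in the lower plot can be reproduced by averaging the measured value at each die location across all wafers in the lot, so that a repeatable spatial pattern reinforces while random noise averages out. A minimal sketch with invented data (real maps would hold thousands of die per wafer):

```python
# Each wafer map is a dict of (row, col) -> measured value.
def stack_maps(wafer_maps):
    """Average measurements by die location across all wafers in a lot."""
    totals, counts = {}, {}
    for wmap in wafer_maps:
        for loc, value in wmap.items():
            totals[loc] = totals.get(loc, 0.0) + value
            counts[loc] = counts.get(loc, 0) + 1
    return {loc: totals[loc] / counts[loc] for loc in totals}

lot = [
    {(0, 0): 1.0, (0, 1): 2.0},
    {(0, 0): 3.0, (0, 1): 2.0},
]
print(stack_maps(lot))  # -> {(0, 0): 2.0, (0, 1): 2.0}
```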

Package test data

In addition to the test measurement data taken during wafer probe, data generated during the final test of packaged parts are also saved for analysis. Package test data are stored in the same manner as the wafer test data, using CDFs for the raw measurement data and the SQL database for the statistics, as shown in Fig. 4.

If the parts have been randomized prior to testing, the map of test values should likewise be random. If, however, the map shows nonrandom variations over the duration of the test, then either the performance of the tester itself was somehow changing or the parts were not randomized.

Tester "drift" can be a valid concern, given the very high precision required for some analog measurements, and the map can be a useful monitoring tool. Testers can also develop continuity problems that might cause good parts to "fail." The appearance of "streaks" in the pattern might indicate this type of problem.


Figure 3. a) Each point represents the mean and sigma for one wafer (for the test "VREG8V"). Any wafer with a mean and sigma located on the red (outer) triangular "box" had a Cpk equal to one, while any wafer inside the box had a Cpk greater than one (a population with a Cpk greater than one has both the upper and lower 3-σ limits of the distribution located within the specification limits). The green (inner) triangle represents the minimum Cpk box that could be drawn for the given population. The "peak value" of this triangle corresponds to the calculated mean and sigma for the combined die population from all of the wafers (about 3 million die); b) map for the individual wafer represented by the circled dot in a).

Statistical anomalies

The underlying causes of performance variation are usually either discrete or continuous. An example of a discrete phenomenon would be a high resistance wire bond that could cause a device to demonstrate anomalous performance. An anomalous device, loosely defined as one with measured performance outside the 3-σ limits of the "main population," might or might not perform within specification limits.

Anomalies within a set of data present a problem when calculating mean and sigma values. Their values may be extreme, causing their influence to be disproportionately large. To address this problem, mean and sigma calculations should usually exclude anomalous values. One way to do this is to use box and whisker statistical values as filter limits. Another way is to use a method of successive approximation for calculating mean and sigma: a preliminary calculation of mean and sigma is made using the entire population. The 3-σ limits based on these values are then used to filter out extreme values for a second calculation of mean and sigma. This process is repeated until convergence occurs.
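The successive-approximation method described above can be sketched directly. The convergence test (the kept set no longer changes) and the iteration cap are implementation choices for the example, not details taken from the article:

```python
import statistics

def robust_mean_sigma(values, max_iter=10):
    """Iteratively recompute mean/sigma, excluding values outside the
    current 3-sigma limits, until the kept population stops changing."""
    kept = list(values)
    for _ in range(max_iter):
        m = statistics.mean(kept)
        s = statistics.pstdev(kept)
        # Re-filter the full population against the new 3-sigma limits.
        new = [v for v in values if abs(v - m) <= 3 * s]
        if new == kept:
            break
        kept = new
    return m, s, kept

# One extreme value (a "discrete" anomaly) barely shifts the final result.
m, s, kept = robust_mean_sigma([9.0, 10.0, 11.0] * 10 + [100.0])
print(m, len(kept))  # -> 10.0 30
```

The values excluded from `kept` are exactly those that would be tallied as anomalies high or low.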

Anomalous values excluded from mean and sigma calculations are recorded as a percentage above (anomalies high) or below (anomalies low) the 3-σ limits from which they were excluded. The mean/sigma values will usually be a more reliable indicator of continuous shifts in the process, while the anomaly high/low count values can be used to monitor discrete phenomena.

Data correlation

Storing both package test and wafer probe data in a common database allows easier observation of measurement correlations. This capability is important, for example, when IC performance with temperature can only be verified at the package level, or when packaging effects can cause variations in performance. Both of these situations occur frequently with IC sensors.

When correlating data from wafer probe and from package test, packaged parts must be grouped by wafer lot or by wafer number. The results can easily justify this inconvenience. At Honeywell, this ability to correlate wafer and package test data has resulted in significant cost savings for existing products. The improved understanding of process dependencies is also critical for developing new products. Data from in-line process measurements and on-wafer process monitors is also included for additional correlation analysis.

Future systems requirements

The system described could not have been developed without advances in computer technology over the last few years. New advances will permit even more system enhancements. For example, web server technology will enhance data accessibility, while CD-ROMs will be replaced with the new DVD format (providing a 16× increase in per-disk storage). Increased data processing speed will facilitate such advanced techniques as pattern recognition and multivariate analysis.

The motivation for incorporating this new technology into future CIM systems stems from the continuing pressure to improve product performance while reducing product cost. These product enhancements must be driven in part by the detailed analysis of data collected during product test.

Conclusion

This article has described some techniques for using production test data to help monitor and control parametric variations in IC fabrication processes. Many of the techniques are beyond the capabilities of off-the-shelf CIM systems; some details were provided to show how these capabilities could be developed. The ideas and examples should demonstrate both the need and the means for making better use of production test data to improve product yield and performance.

ROBERT C. FARRIER received his BS degree in physics from Texas A&M University in 1965. He has been an engineer with Honeywell since 1986, and has been involved with the development of the system described in this article. Honeywell Inc., MICRO SWITCH Optoelectronic Products, 830 E. Arapaho Rd., Richardson, TX 75081; ph 972-470-4246, fax 972-470-4278, e-mail [email protected].


Figure 4. Test data for 1271 parts from a single CDF. In place of the wafer map, a serpentine pattern shows the chronological order in which the parts were tested.
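The serpentine layout can be generated by mapping each part's chronological test index to a plot position, reversing direction on alternate rows. The row width here is an arbitrary parameter chosen for illustration:

```python
def serpentine_position(index, width):
    """Map a chronological test index to a (row, col) plot position,
    reversing direction on alternate rows (boustrophedon order)."""
    row, offset = divmod(index, width)
    col = offset if row % 2 == 0 else width - 1 - offset
    return row, col

# First two rows of a 4-wide map: left-to-right, then right-to-left.
print([serpentine_position(i, 4) for i in range(8)])
# -> [(0, 0), (0, 1), (0, 2), (0, 3), (1, 3), (1, 2), (1, 1), (1, 0)]
```

Plotting test values at these positions preserves test order spatially, which is what makes tester drift or continuity "streaks" visible.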

CIM enhancements for data analysis

A typical CIM system generates only about one Mbyte of data for each wafer lot processed [1]. The expanded CIM system described here, however, generates another 10 Mbytes of data per lot probed, plus 5 Mbytes during package test. All of this data is saved indefinitely, and any data less than two years old is kept on-line.

In addition to satisfying data storage requirements, the system must minimize data access delays. With the system described, wafer maps and other plots of the on-line data can be created in just seconds.

Input data files. While it was not possible to standardize on a single type of tester for all wafer probe and package test requirements, it was possible to define a standard Input Data File format. The structure of this file supports all device types.

Statistics server. A three-tier client-server architecture ensures that all statistical calculations are done properly and that this process does not slow down production testing. As shown in the diagram, a common Statistics Server performs all of the statistical calculations for each set of test measurement data. Both the raw data and the corresponding statistics are then saved as a binary file. Typically one file is created for each wafer probed.

CD-ROM data archive. Combining the raw data with statistical calculations produces the common data files (CDFs). A data archive then stores these files indefinitely. The structure of these files was optimized to be as compact as possible, while minimizing data access time.

SQL database. A relational database that is accessible using standard SQL (Structured Query Language) stores the statistical data for each wafer (and corresponding group of packaged parts). This statistical data requires about one Mbyte/wafer lot. (The individual die measurement data are excluded from the relational database because data access - e.g., for wafer maps - would be too slow.)

Data analysis tools. We use a combination of off-the-shelf and custom software tools to view the test data. For example, wafer maps are generated with software that was developed in-house using Visual Basic, while other types of data analysis use off-the-shelf tools to access and display the data in the SQL database directly.

Reference

1. R. Johnson, "Design rules for fab CIM," Solid State Technology, September 1996.