11/01/1997

Lithography process control

Harry J. Levinson, Advanced Micro Devices, Sunnyvale, California

Since its invention in the 1920s, statistical process control (SPC) has been the most powerful tool available for controlling manufacturing processes. Unfortunately, the application of SPC to lithography is not always straightforward. This article discusses some of the difficulties in applying SPC to microlithography, and ways to work around these complications. Process control problems that arise during technology development, as well as manufacturing, are also discussed.

Manufacturing efficiency is required for success in the highly competitive microelectronics industry. Effective manufacturing, in turn, demands maximum utilization of expensive equipment and minimal rework and scrap. The manufacturing process must consistently produce product parameters within specifications: the process must be in control. Moreover, control must be achieved with the smallest possible expenditure of equipment time. Similar arguments concerning efficiency apply to development operations [1], where reduced cycle times have high economic value, and technology development is facilitated by processes that are under control.

Statistical methods are critically important, but are never sufficient. While SPC and other techniques help to determine the state of control, identifying sources of variation and taking corrective action require knowledge of the underlying process science.

Problems with the naive application of SPC

A number of years ago, an engineer responsible for resist processing came to me very discouraged. He had worked diligently to produce resist coatings with consistent thicknesses, and the measurements were indeed nearly always within the ±50 Å specification limit. Nevertheless, when he tried to apply SPC to the resist coatings, the operation frequently appeared to be out of control. After asking a few questions I realized that the engineer had applied SPC formalism in textbook fashion, which in his case was the wrong thing to do.


Figure 1. A control chart for an in-control resist coating operation, where the control limits have been calculated incorrectly.

The engineer coated a test wafer with resist daily, then measured the thickness of the resist at multiple sites across the wafer. He computed a range and mean for the resist thickness across the wafer from these data, and generated X-bar and range (R) charts after several days. Next, the engineer estimated the standard deviation ($\hat{\sigma}$) of the process from the average range ($\bar{R}$) using the textbook formula:

$$\hat{\sigma} = \frac{\bar{R}}{d_2} \qquad (1)$$

where $d_2$ is a constant found in most textbooks on statistics [2].

The engineer followed the procedure from the textbooks line by line and ended up with a control chart like the one shown in Fig. 1. While the process performed consistently, the calculations indicated that it was frequently out of control. To understand what went wrong, let's explore the assumptions upon which SPC is based.

Statistical process control is a mathematical tool that allows production workers to distinguish normal from abnormal levels of variation. It rests upon the ability, justified by the central limit theorem, to transform any random process into one that varies according to a normal (or Gaussian) distribution [3]. This method collects the individual data from a randomly varying process into subgroups. According to the central limit theorem, the subgroup averages will be normally distributed, regardless of how the original data varied, for sufficiently large subgroups. Subgroup sizes of only two or three often produce sufficiently good approximations to normal distributions.

Normal distributions are particularly convenient because they are completely characterized by their means and standard deviations. For completely random distributions, there are a number of ways to estimate the standard deviation required to determine control limits. One way is to calculate the sample standard deviation directly from its definition:

$$s = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(x_i - \bar{x}\right)^2} \qquad (2)$$

where N is the number of measurements and $\bar{x}$ is their mean.

However, SPC originated in the 1920s [4], when digital computers with automatic data upload did not exist. Calculating the standard deviation according to Eqn. 2 was not a simple task without computers or calculators. Accordingly, SPC practitioners used other methods, such as Eqn. 1, that estimate the population standard deviation from only a small amount of data at any given point in time. For completely random distributions, the two methods, direct calculation (Eqn. 2) and estimation from the range (Eqn. 1), are equivalent. Unfortunately, resist coatings, like many situations in microlithography, are nonrandom.

[Table 1. Resist thickness means measured on successive days (data not reproduced).]

Resist thickness measurements taken from one wafer did not necessarily have the same distribution as measurements from another wafer measured on a different day, due to slowly shifting process variations. The mean thickness was likely to be different from day to day (see Table 1). Figure 1 charted the day-to-day variation, but the standard deviation used for process control considered only the variation across single wafers and captured none of the variation in the means from day to day. Because the across-wafer variation was significantly less than the day-to-day variation, these control limits underestimated the total variation, and the process appeared to be out of control.
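The effect is easy to reproduce numerically. The following sketch (with made-up numbers, not data from the article) simulates daily wafers whose across-wafer spread is small but whose means drift, then computes X-bar control limits from $\bar{R}/d_2$ as the engineer did. The limits reflect only the across-wafer variation, so a large fraction of perfectly ordinary days are flagged.

```python
import numpy as np

rng = np.random.default_rng(0)
d2 = 2.970                      # textbook d2 constant for subgroups of size 9 [2]
n_sites, n_days = 9, 50

# Each "day" is one wafer: nine thickness readings (angstroms) with a
# small across-wafer spread around a mean that itself drifts day to day.
day_means = 10000 + rng.normal(0, 25, n_days)           # day-to-day drift
wafers = day_means[:, None] + rng.normal(0, 10, (n_days, n_sites))

xbar = wafers.mean(axis=1)              # daily (subgroup) means
rbar = np.ptp(wafers, axis=1).mean()    # average within-wafer range

sigma_rbar = rbar / d2                  # Eqn. 1: sees across-wafer spread only
sigma_means = xbar.std(ddof=1)          # actual spread of the daily means

# X-bar control limits computed the textbook way, from R-bar/d2:
ucl = xbar.mean() + 3 * sigma_rbar / np.sqrt(n_sites)
lcl = xbar.mean() - 3 * sigma_rbar / np.sqrt(n_sites)
outside = np.mean((xbar > ucl) | (xbar < lcl))

print(f"sigma from R-bar/d2:         {sigma_rbar:6.1f} A")
print(f"sigma of the daily means:    {sigma_means:6.1f} A")
print(f"days flagged out of control: {outside:.0%}")
```

With these assumed numbers the range-based sigma is roughly 10 Å while the daily means scatter by about 25 Å, so most days fall outside the limits even though nothing is wrong with the process.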

Nonrandom variation

The assumptions underlying SPC can be violated if the processes being monitored are nonrandom. For example, resist coat and develop processes occur in radially symmetric configurations, so systematic variations in resist thicknesses and linewidths often have radial functional dependence. Several issues must be considered when trying to control systematically varying processes.

Consider a quantity that varies quadratically, such as resist thickness as a function of radial distance from the center of a wafer. Suppose measurements are sampled uniformly, and the standard deviation is calculated according to Eqn. 2. Figure 2 shows the standard deviation as a function of the number of sampled points. The process appears more uniform as the sample size increases, which is not what occurs for completely random processes.


Figure 2. Nonrandomly varying quantities and their sample standard deviations, calculated as functions of the number of uniformly distributed samples: a) linearly varying parameter, b) sample standard deviation for a linearly varying parameter, c) quadratically varying parameter, and d) sample standard deviation for a quadratically varying parameter.

This problem has practical significance because minimizing measurements minimizes costs. The number of measurements is often reduced when a process is transferred from development to manufacturing, after a level of stability has been demonstrated. If the process has a systematic component, reducing the number of measurements will increase the apparent process variation. In reality, the process might be stable, but process engineers would probably be required to investigate the apparent sudden rise in variation. In a process with a linearly varying systematic component, a decrease from 13 to 5 measurements would increase the standard deviation by 20%, as the short calculation below illustrates. Many processes have spatially systematic contributions to the overall variation, so analysis should take this effect into account.
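A minimal sketch of that effect, assuming a noise-free, linearly varying profile sampled uniformly along a wafer diameter (the 13- and 5-site counts follow the example above; the normalized positions and unit slope are illustrative):

```python
import numpy as np

def sample_std_linear(n_sites, slope=1.0):
    """Sample standard deviation (Eqn. 2) of a noise-free, linearly
    varying profile measured at n_sites uniformly spaced positions."""
    x = np.linspace(-1.0, 1.0, n_sites)   # normalized positions across a diameter
    return np.std(slope * x, ddof=1)

s13 = sample_std_linear(13)
s5 = sample_std_linear(5)
# With this model the apparent sigma grows roughly 20% as the sampling
# shrinks from 13 sites to 5, even though the process has not changed.
print(f"13 sites: {s13:.3f}  5 sites: {s5:.3f}  "
      f"increase: {100 * (s5 / s13 - 1):.0f}%")
```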

Linewidth control

The economic value of linewidth control was recently estimated to be $7.50/nm/chip for mainstream microprocessors, due to the relationship between gate length and processor speed [5]. Small gate lengths produce fast parts, but gates that are too short will break down. The economic advantage lies in having gates that are neither too short nor too long. In other words, the gate length needs to be controlled, and lithography is a principal control factor.

Lithographic pattern formation is extremely complex, involving both stepper optics and resist photochemistry. Because of diffraction, the light intensity profile does not have a well-defined edge (Fig. 3). For a given set of optics, a decrease in feature size will further flatten the optical profile. Yet, when the resist processing is completed, the edge of the remaining resist must correspond to the desired point on this flattened profile. Most of the time, this is done the hard way, by controlling all parameters known to affect linewidth.


Figure 3. The problem of imaging in optical lithography. The light intensity distribution (cross section) is shown at left for a long slit on a chrome/glass mask, imaged by a lens onto a wafer. The light does not have an ideal distribution (right) with a well-defined edge, but is smoothed out because of diffraction.

Before discussing the parametric control of linewidth, a brief explanation of why the hard route has been chosen is in order. Real-time measurement of the linewidth during processing is very difficult. The small linewidths of modern microelectronics require scanning electron microscopes, which cannot be used for in situ measurements during resist development. Recently introduced techniques using scatterometry to measure linewidths in situ are still in the laboratory [6]. Detection of develop endpoint improved wafer-to-wafer resist uniformity in i-line resists, but the properties of DUV resists make this technique difficult, if not impossible, to apply. Perhaps future generations of DUV photoresists will be more compatible with real-time process control. Until then, leading-edge lithography relies on parametric control of linewidths: the hard way.

The final size of resist features depends on a number of optical and resist parameters (Fig. 4). DUV resists are affected by more variables than i-line resists, and are often more sensitive to particular parameters, such as post-exposure bake temperature. Moreover, chemical amplification makes DUV resists sensitive to part-per-billion levels of amines in the air, a problem that i-line resists do not share. The level of control required for any particular parameter depends upon the sensitivity of the resist to that factor. Apex, a first-generation positive DUV resist, has a 20 nm/°C sensitivity to post-exposure bake temperature [7], while UV5, a DUV resist introduced recently by Shipley, has only a 3.3 nm/°C sensitivity [8].
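These sensitivities translate directly into linewidth-control requirements. As a rough illustration (the ±0.5°C hot-plate control figure below is an assumed value for this sketch, not from the article; the sensitivities are those quoted above):

```python
# Rough linewidth-budget arithmetic: CD contribution = sensitivity x delta-T.
peb_sensitivity_nm_per_C = {"Apex": 20.0, "UV5": 3.3}  # from refs. [7, 8]
delta_T = 0.5  # assumed post-exposure bake control, +/- deg C (illustrative)

for resist, s in peb_sensitivity_nm_per_C.items():
    print(f"{resist}: +/-{s * delta_T:4.1f} nm of linewidth variation "
          f"for +/-{delta_T} deg C of bake-temperature variation")
```

Under this assumption, the same hot plate would cost about ±10 nm of linewidth with Apex but under ±2 nm with UV5, which is why the lower-sensitivity resist so substantially relaxes the temperature-control requirement.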

Patterns in resist begin with optical images, so the optical profiles must be produced consistently within exposure fields, across wafers, and over time. Some optical variation results from inherent lens aberrations and cannot be improved by the process engineer. Variation in other parameters, such as focus, can be minimized by good control methodologies.


Figure 4. The causes of linewidth variation.

In production facilities, and particularly in development pilot lines, variations in the substrate reflectivity can modulate the linewidths. The lithography engineer must separate reflectivity changes from variations in the lithography process itself. Monitor wafers with consistent optical properties, like bare silicon wafers, are useful. Figure 5 tracks one such linewidth control monitor, which was used to characterize variations in the early days of DUV lithography. This monitor was employed on a development pilot line with highly variable "product" substrates. The linewidth monitor helped to distinguish lithography-specific changes, such as changes in resist photospeed, from changes induced by the substrates.


Figure 5. Linewidth control trend chart used to monitor a DUV pilot line.

Overlay

Individual components of overlay are obtained by calculating the parameters of appropriate models [9]. Some elements of overlay are common to every exposure field. For example, if the magnification of the stepper lens is not set quite right, there will be a magnification error that is nearly the same in every exposure field on any given wafer. Errors, like magnification, that affect overlay within an exposure field are termed intrafield errors. Interfield (or grid) errors result from incorrect placement of the fields across the wafer. Considering both intrafield and interfield errors, a step-and-scan system has 10 independent overlay parameters (see Table 2).

The various parameters in Table 2 can be determined by fitting a set of overlay measurements to appropriate mathematical models. For step-and-scan systems, these models are completely linear. Step-and-repeat system models are similar, except for the absence of anisotropic magnification and intrafield skew.
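Because the models are linear in their parameters, the fit reduces to ordinary least squares. The sketch below is not from the article: it illustrates the idea with a simplified six-parameter interfield (grid) model, using one common parameterization of translation, scale, rotation, and nonorthogonality, and made-up measurements. A production model would add the intrafield terms of Table 2 in the same linear fashion.

```python
import numpy as np

def fit_grid_model(x, y, dx, dy):
    """Least-squares fit of a six-parameter linear overlay (grid) model:
        dx = Tx + Mx*x - theta*y
        dy = Ty + My*y + (theta + ortho)*x
    x, y: field positions; dx, dy: measured overlay errors."""
    ones, zeros = np.ones_like(x), np.zeros_like(x)
    # Design-matrix columns: Tx, Ty, Mx, My, theta, ortho
    rows_dx = np.column_stack([ones, zeros, x, zeros, -y, zeros])
    rows_dy = np.column_stack([zeros, ones, zeros, y, x, x])
    A = np.vstack([rows_dx, rows_dy])
    b = np.concatenate([dx, dy])
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params, b - A @ params       # fitted parameters and residuals

# Usage with made-up data: positions in mm, overlay errors in nm.
x = np.array([-80.0, 80.0, 80.0, -80.0, 0.0])
y = np.array([-80.0, -80.0, 80.0, 80.0, 0.0])
dx = np.array([12.0, 20.0, 4.0, -4.0, 8.0])
dy = np.array([-6.0, 10.0, 18.0, 2.0, 6.0])

params, resid = fit_grid_model(x, y, dx, dy)
print("Tx, Ty, Mx, My, theta, ortho:", np.round(params, 4))
print("max residual (nm):", np.abs(resid).max().round(2))
```

The residuals left after such a fit are exactly the "residual errors" discussed below: whatever the model cannot express, the stepper cannot correct.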

Since wafer steppers were first used for lithography, overlay control has nearly always focused on the quantities listed in Table 2. Residual errors not captured by the model were always much smaller in comparison. When different steppers are used for successive masking layers, however, residual overlay variation arising from mismatch between the steppers can be substantial. Intrafield errors between different steppers result when aberrations in lenses displace features relative to each other.

As efforts to reduce the overlay errors accounted for by the model parameters have been successful, these errors have decreased faster than lens residual errors. Consequently, residual errors now consume an appreciable fraction of the overlay budget [10].

As with linewidth, process and equipment control are facilitated by highly reproducible "golden wafers" [11]. Alignment has been especially prone to process sensitivities, and such controlled substrates permit classification of overlay errors into process-dependent and process-independent categories. In the latter category, for example, a loss of stage precision will affect overlay regardless of the substrate. Chemical mechanical polishing, on the other hand, involves an interaction between wafers and steppers, and the resulting errors are often limited to specific process layers. With a database of overlay on "golden wafers," the lithography engineer can determine whether problems on product wafers result from stepper-specific problems or whether there is a process contribution.

[Table 2. The 10 independent overlay parameters of a step-and-scan system (not reproduced).]

Reticles can be another source of overlay error. Because of the physics of beam writers, a large fraction of the registration errors on reticles are described by the same linear model that applies to step-and-scan intrafield overlay errors. Correction of these errors depends on the type of steppers used. For example, in some steppers, overlay offsets must be determined empirically on a product-by-product or layer-by-layer basis. Compensation for reticle registration errors will depend on the extent to which they are represented in the overlay measurements. A survey of reticles has shown that four-point measurements capture about 25% of the systematic reticle registration error (Fig. 6). On the other hand, registration data taken from many points within a field allow for much more substantial compensation. Exploiting registration data from many points is practical only on steppers whose machine corrections are not determined from measurements on product wafers, because overlay can be measured at only a few locations on product wafers.


Figure 6. Reticle registration errors, raw data, and residuals following correction for linear errors.


Figure 7. Intrafield registration errors for a step-and-scan system with a reticle stage in need of adjustment. The substrates were produced on a good stepper. Large variations in the overlay vectors across the scan (left to right) are indications of a reticle stage scanning problem.

These examples illustrate an extremely important fact. Overlay control methodologies are not always universally applicable; they relate to the specific characteristics of the wafer aligners that are used. In correcting reticle errors, for instance, step-and-repeat systems are quite different from step-and-scan machines. There are only two intrafield degrees of freedom in step-and-repeat, compared to four for step-and-scan, so repeaters have less ability to compensate for reticle errors than scanners do.

Step-and-scan errors

The re-introduction of scanning technology to lithography has also introduced new sources of process errors. Intrafield distortion errors are very stable in step-and-repeat systems, often not changing measurably over years of use. The intrafield characteristics of step-and-scan systems are produced dynamically and are therefore far more susceptible to change. Figure 7 shows intrafield overlay vectors produced by a step-and-scan system in need of adjustment. The large overlay vectors on the left side of the field resulted from scanning errors. Once corrected, the vectors varied little across the scan. Nonproduct monitor wafers are particularly useful for identifying such problems, because large numbers of overlay measurement sites can be placed within single exposure fields.

Summary

Process control in lithography covers a broad range of issues, all of which must be addressed to achieve the desired result. Science, optics, and photochemistry must merge with statistical methods and equipment monitors to achieve control in manufacturing or pilot line operations. As technology progresses, the issues that must be addressed to achieve control become more complex. Effects that once were considered unimportant are now challenges for today's lithography engineers.

References

1. H.J. Levinson, J. Ben Jacob, "Managing Quality Improvement on a Development Pilot Line," Quality Management Journal, Vol. 3(2), pp. 16-35, 1996.

2. See, for example, A.J. Duncan, Quality Control and Industrial Statistics, 5th Ed., Irwin, Homewood, IL, 1986.

3. M.H. DeGroot, Probability and Statistics, Addison-Wesley, Reading, MA, 1975.

4. W. Shewhart, Economic Control of Quality of Manufactured Product, D. Van Nostrand Co., New York, 1931.

5. J. Sturtevant, SPIE Conference on Optical Lithography, 1997.

6. J. Sturtevant, S. Holmes, T. van Kessel, M. Miller, D. Mellichamp, "Use of Scatterometric Latent Image Detector in Closed Loop Control of Linewidth," SPIE Vol. 2196, pp. 352-359, 1994.

7. G. Amblard, Y. Trouiller, D. Boutin, P. Moschini, S. André, "Optimization of a Positive Tone Deep UV Process of Industrial 0.35-µm Technology Production," Olin Interface '96, pp. 81-98, 1996.

8. Shipley, private communication.

9. H.J. Levinson, W.H. Arnold, "Optical Lithography," in Microlithography, Micromachining, and Microfabrication, ed. P. Rai-Choudhury, SPIE Press, 1997.

10. H.J. Levinson, M. Preil, P. Lord, SPIE, 1997.

11. M. van den Brink et al., "Matching of Multiple Wafer Steppers Using a Stable Standard and a Matching Simulator," SPIE Vol. 1087, pp. 218-232, 1989.

Harry J. Levinson manages the Lithography Tools department at Advanced Micro Devices. He has been involved with process control in semiconductor processing for over a decade. He has taught classes on lithography science and process control under the sponsorship of SEMI, SPIE, and UC Irvine. Advanced Micro Devices, One AMD Place, MS 78, Sunnyvale, CA 94088; ph 408/749-2558, e-mail [email protected].