Chin-Chou Kevin Huang, David Tien, KLA-Tencor Corp., San Jose, CA USA
More challenging overlay requirements are driving a trend to use high-order control knobs for production set-up of scanners. This article explores the overlay requirements driving this trend, the challenges of implementing high-order control in production, and the cost-of-ownership trade-offs between different high-order control strategies.
Historically, lithographers have achieved layer-to-layer alignment control by assuming that overlay varies linearly across the wafer and across the scanner field. Although more sophisticated “high-order” control knobs have been available for use during scanner setup and optimization, they were not typically used in production. Only in the recent past, driven by the aggressive shrink in overlay budgets and the added complexities brought by immersion lithography, have these high-order degrees of freedom finally been adopted into the production control loop [1-3]. In this article, we explore the overlay requirements that are driving this trend, the challenges of implementing high-order control in production, and the cost-of-ownership trade-offs between different high-order control strategies.
Meeting the budget
Overlay control is a battle between overlay budget and control capability. Traditional overlay control separates the overlay data into two major components: systematic linear correctables and non-correctable residuals. Previously, the prevailing use of linear correctables was driven on the one hand by the lack of access to high-order capabilities, and on the other by relatively large overlay budgets that did not require the tighter control of high-order corrections. In Figure 1, the systematic linear correctables are shown against the green background, and the many other remaining contributors, as shown, are defined as uncorrected overlay residual contributors.
Figure 1. Overlay error contribution sources can be separated into three distinct categories: 1) Traditional linear overlay errors (green); 2) High order overlay errors (yellow); and 3) Non-systematic random overlay errors (gray).
Single machine overlay in current generation immersion systems is around 5nm, assuming process-related contributing factors are removed. A critical layer in a typical 40nm process node usually requires overlay to be controlled within 10nm. If this overlay requirement is compared against the scanner’s capability, the remaining tolerance, or margin, to accommodate typical fluctuations in overlay is quite small. Moreover, when the added complexity of double patterning lithography for the 32nm process node is folded in, the overlay budget shrinks well below 5nm. Clearly, minimizing overlay residuals, especially for these critical layers, is essential to maintaining low rework and high yield at the lowest cost per die.
Figure 2 shows the overlay mis-registration vector plot for a contact layer exposed by a single immersion scanner. Overlay data were measured using a KLA-Tencor Archer 100 with 34 mAIM targets spread across the field.
Figure 2. Overlay mis-registration wafer plot before corrections: This vector plot of overlay mis-registration errors shows overlay errors as high as 26nm based on a |mean| + 3σ calculation.
Polynomial equations are used to model the overlay vector plot in the spatial x/y domain. The equations below are examples of 3rd order polynomials used to model the x and y overlay vectors in each spatial domain, with odd-numbered k terms applied to x and even-numbered terms to y:

dx = k1 + k3·x + k5·y + k7·x² + k9·x·y + k11·y² + k13·x³ + k15·x²·y + k17·x·y² + k19·y³
dy = k2 + k4·y + k6·x + k8·y² + k10·x·y + k12·x² + k14·y³ + k16·x·y² + k18·x²·y + k20·x³
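As an illustration only (not the authors' production code), a minimal sketch of this modeling step in Python: it builds the 10-term 3rd-order design matrix for one overlay axis and solves for the correctables by least squares. The coordinates and overlay coefficients are hypothetical synthetic data.

```python
import numpy as np

def design_matrix_3rd_order(x, y):
    """One overlay axis of the 3rd-order model: columns
    1, x, y, x^2, x*y, y^2, x^3, x^2*y, x*y^2, y^3 (10 terms)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2,
                            x**3, x**2*y, x*y**2, y**3])

# Hypothetical overlay measurements (nm) at wafer coordinates in mm.
rng = np.random.default_rng(0)
x = rng.uniform(-150, 150, 200)
y = rng.uniform(-150, 150, 200)
ovl_x = 1.0 + 0.02*x - 0.01*y + 1e-5*x*y + rng.normal(0, 0.5, 200)

H = design_matrix_3rd_order(x, y)
k, *_ = np.linalg.lstsq(H, ovl_x, rcond=None)  # least-squares correctables
residuals = ovl_x - H @ k                      # error the model cannot remove
```

The same matrix, evaluated at target coordinates relative to the field center instead of wafer coordinates, serves for the field terms.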
Traditionally, due to the relatively large overlay control margin at the 65nm and larger process nodes, overlay control knobs have been limited to the linear terms, defined by k1 to k6 of the polynomial equations above. When this linear model is used to correct the raw overlay data of Figure 2, the resulting overlay residuals, shown in Figure 3, cannot be reduced below 10nm, which does not meet the required overlay control limits.
Figure 3. After linear corrections the overlay mis-registration wafer plot shows overlay errors to be as high as 12nm (|mean| + 3σ), which is beyond the 10nm overlay requirements for this layer.
In contrast, the model fits the raw data significantly better when the full 3rd order polynomials above are used for both grid and field terms, as shown in Figure 4. As a result, the corrections are much more effective and now meet the control requirement.
Figure 4. After high order corrections the overlay mis-registration wafer plot can be reduced to 8nm (|mean| + 3σ), which now meets the 10nm overlay requirements for this layer. This represents over 30% improvement in overlay control for this layer by using high order modeling versus traditional linear modeling methods.
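The linear-versus-high-order comparison above can be reproduced in miniature. The sketch below, using synthetic overlay data with deliberately injected 2nd/3rd-order content (all coefficients hypothetical), fits both models and scores each with the |mean| + 3σ metric quoted in the figure captions.

```python
import numpy as np

def mean_plus_3sigma(v):
    # Overlay metric for one axis: |mean| + 3*sigma.
    return abs(v.mean()) + 3 * v.std()

def residuals(H, ovl):
    k, *_ = np.linalg.lstsq(H, ovl, rcond=None)
    return ovl - H @ k

rng = np.random.default_rng(1)
x = rng.uniform(-150, 150, 500)   # hypothetical site coordinates, mm
y = rng.uniform(-150, 150, 500)
# Synthetic x-axis overlay with deliberate 2nd/3rd-order content (nm).
ovl = 2.0 + 0.03*x - 2e-4*x**2 + 5e-7*y**3 + rng.normal(0, 0.6, 500)

H_lin = np.column_stack([np.ones_like(x), x, y])   # linear terms (k1, k3, k5)
H_hi  = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2,
                         x**3, x**2*y, x*y**2, y**3])

m_lin = mean_plus_3sigma(residuals(H_lin, ovl))
m_hi  = mean_plus_3sigma(residuals(H_hi, ovl))
print(f"linear: {m_lin:.2f} nm   3rd order: {m_hi:.2f} nm")
```

Because the injected error contains quadratic and cubic terms, the linear fit leaves them behind while the 3rd-order fit removes them, mirroring the Figure 3 versus Figure 4 result.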
In addition to the selection of modeling terms, other factors need to be considered when moving to high-order control: source of variance analysis, sample planning, and correction efficiency. Each is discussed below.
Overlay source of variance error decomposition
One problem in the production line is that high rework rates can often be attributed to repeated attempts to adjust overlay using linear corrections, which simply cannot bring the overlay within the required control limits. One way to tackle poor correction results is to understand their source and root cause. For a typical current-generation exposure system, the residual contributors can be broken down into five components.
The first component is the wafer-level (or grid) higher-order (2nd + 3rd order) component. Adjustment knobs for grid components are commonly available on latest-generation scanners. The second component is the wafer-level higher-order (4th ~ 6th order) component. Adjustment knobs for this component are generally less common in production usage, but some scanner vendors provide field-by-field correction functionality, which combines the 2nd through 6th order corrections without requiring access to all of the knobs needed for individual term corrections.
The third component is the field-level high-order (2nd + 3rd order) component, which is available from certain scanner vendors. The fourth component is the field-level higher-order (4th ~ 6th order) component; it is normally considered a field-level fingerprint, which usually cannot be corrected. The fifth and final component is the unmodeled component. It can be treated as a random component combining the exposure tool and metrology random errors; its main contributors are the scanner's mechanical capability, metrology tool measurement uncertainty, and overlay mark robustness. The equation below describes the decomposition of the linear residuals in the statistical variance domain:

σ²(linear residual) = σ²(grid, 2nd + 3rd order) + σ²(grid, 4th ~ 6th order) + σ²(field, 2nd + 3rd order) + σ²(field, 4th ~ 6th order) + σ²(unmodeled)
By using the above equation and ANOVA (analysis of variance), the linear residuals for the contact layer shown in Fig. 3 can be broken down into five components of variance. Figure 5 shows these individual components in a stacked variance bar chart to highlight the contribution of each component toward the final overlay control objectives.
Figure 5. Source of variance in X and Y direction: The overlay variance (i.e. residuals squared) is broken down to show the source of contribution for the error in each axis using the variance domain.
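A minimal sketch of this variance decomposition, restricted to grid terms for brevity and run on hypothetical synthetic data: nested polynomial fits of increasing order are compared, and the successive drops in residual variance give the stacked components of the bar chart.

```python
import numpy as np

def poly_design(x, y, order):
    # All monomials x^i * y^(d - i) with total degree d <= order.
    cols = [x**i * y**(d - i) for d in range(order + 1) for i in range(d + 1)]
    return np.column_stack(cols)

def residual_variance(H, ovl):
    k, *_ = np.linalg.lstsq(H, ovl, rcond=None)
    return np.var(ovl - H @ k)

rng = np.random.default_rng(2)
# Hypothetical grid-level overlay on normalized wafer coordinates.
x = rng.uniform(-1, 1, 800)
y = rng.uniform(-1, 1, 800)
ovl = 1.0 + 2.0*x - 1.5*x**2*y + 3.0*x**5 + rng.normal(0, 0.3, 800)

v1 = residual_variance(poly_design(x, y, 1), ovl)  # after linear correction
v3 = residual_variance(poly_design(x, y, 3), ovl)  # after 2nd + 3rd order
v6 = residual_variance(poly_design(x, y, 6), ovl)  # after 4th ~ 6th order

grid_2nd_3rd = v1 - v3   # variance removable by 2nd + 3rd order knobs
grid_4th_6th = v3 - v6   # variance removable by 4th ~ 6th order knobs
unmodeled    = v6        # random / unmodeled remainder
```

Because the models are nested, the components are non-negative and sum exactly to the linear residual variance, which is what makes the stacked-bar presentation of Figure 5 well defined.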
A 9×4 (9 fields/wafer × 4 sites/field) sampling strategy is commonly seen in the production line. If this 9×4 sample plan is used, the overlay can be effectively corrected at the sampled sites. However, the overlay at the unsampled locations may not be corrected nearly as effectively.
As this example shows, the sampling plan (along with the model used for correction and feedback) is a critical part of determining the overall best control strategy using high-order modeling. In a production environment, it becomes critical to find an optimal sample plan that represents the entire overlay population with the least amount of sampling [4]. The overlay control parameters k1 to k20 in the 3rd order polynomial equations above are correctables representing the overlay population and can be solved by least squares:

K = (HᵀH)⁻¹ Hᵀ OVLm

where K = [k1, k2, …, k20]ᵀ is the vector of correctables; H is the design matrix whose columns are the polynomial terms evaluated at each measurement location; x and y are the coordinates of the field center for the grid correctables, or the target locations relative to the field center for the field parameters; and OVLm is the vector of measured overlay values at each location.
By comparing the singular values of the H matrix (the geometry-based design matrix) between the full sample and candidate sample plans, one can determine the optimal sample plan and minimize the sampling effect. The same methodology can be used to select the optimal field sample plan and to determine whether linear or high-order modeling is needed.
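A sketch of this singular-value comparison, with hypothetical site coordinates: two candidate 30-site plans are scored by the smallest singular value of their 3rd-order design matrix H. A plan clustered near the wafer center constrains the high-order terms poorly and scores much lower than a plan spread across the wafer.

```python
import numpy as np

def poly_design(x, y, order):
    # All monomials x^i * y^(d - i) with total degree d <= order.
    cols = [x**i * y**(d - i) for d in range(order + 1) for i in range(d + 1)]
    return np.column_stack(cols)

def min_singular_value(x, y):
    """Smallest singular value of the 3rd-order design matrix for a plan.
    Larger is better: every model term is well constrained, so measurement
    noise is amplified less when solving for the correctables."""
    return np.linalg.svd(poly_design(x, y, 3), compute_uv=False).min()

rng = np.random.default_rng(3)
# Candidate measurement sites in normalized wafer coordinates.
x_full = rng.uniform(-1, 1, 500)
y_full = rng.uniform(-1, 1, 500)

# Two 30-site plans: one clustered near the wafer center, one spread out.
near = np.flatnonzero(x_full**2 + y_full**2 < 0.25)[:30]
far  = rng.choice(500, 30, replace=False)

q_clustered = min_singular_value(x_full[near], y_full[near])
q_spread    = min_singular_value(x_full[far], y_full[far])
```

Ranking candidate plans by this score (against the full-sample baseline) is one simple way to pick a plan that represents the whole population with far fewer measurements.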
It is important to note that both grid and field corrections are needed to achieve the full potential of any high-order control strategy. As shown in Figure 6, the four-corner sample plan does not adequately represent the field behavior. However, by adding seven measurement points in the field, the corrections from the new eleven-point plan enable a significant residual improvement.
Figure 6. Overlay mis-registration field plot after corrections using either 4 corner sampling or 11 points in-field sampling: The vector plot on the left side shows that the 4 corner (circled) sampling plan does not correct the actual errors as shown by the relatively large error vectors still remaining. In contrast, the 11 point vector plot (right side) using the in-field sample plan shows much smaller error vectors across the entire field after corrections and provides more than 35% improvement in residual reduction in this particular case.
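The four-corner limitation can be seen directly from the rank of the in-field design matrix. In the sketch below (with a hypothetical seven-point interior layout), four corners give only 4 equations for the 10 terms of a 3rd-order field model, while an eleven-point plan reaches full rank, making every field term observable.

```python
import numpy as np

def field_design(x, y):
    # 3rd-order in-field model for one overlay axis: 10 terms.
    return np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2,
                            x**3, x**2*y, x*y**2, y**3])

# Four-corner plan in normalized field coordinates: 4 equations,
# 10 unknowns, so the high-order field terms are undetermined.
xc = np.array([-1.0, 1.0, -1.0, 1.0])
yc = np.array([-1.0, -1.0, 1.0, 1.0])
rank4 = np.linalg.matrix_rank(field_design(xc, yc))

# Add 7 interior points (hypothetical layout) to form an 11-point plan:
# the design matrix now has full column rank.
rng = np.random.default_rng(4)
x11 = np.concatenate([xc, rng.uniform(-0.8, 0.8, 7)])
y11 = np.concatenate([yc, rng.uniform(-0.8, 0.8, 7)])
rank11 = np.linalg.matrix_rank(field_design(x11, y11))
```

With a rank-deficient plan, least squares silently aliases the unmeasured high-order terms into the linear ones, which is why the four-corner corrections in Figure 6 leave large residual vectors behind.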
Figure 7 shows the relationship between sample points, data modeling, and correction efficiency based on the raw data set shown in Fig. 2. Without proper modeling, there is no guarantee that the best overlay control will be obtained, even if the entire population of 4760 points is measured. High-order modeling and corrections can improve the overlay control by up to 25% in this case study. Interestingly, it is not necessary to measure the full sample of thousands of overlay points to take advantage of high-order modeling. Instead, most if not all of the improvement in overlay can be gained with a properly selected sample plan determined through the optimization process.
Figure 7. Correction Improvement vs. Sample Plan: The amount of improvement in the overlay error is a function of both the sample plan and the data modeling technique. This graph shows that an optimized 24×11 sample plan using high order grid and field corrections can achieve nearly all of the improvement of a full sampling plan with significantly reduced sampling overhead. Overall, this represents a 36% overlay improvement compared with the 9×4 sample plan or 20% improvement compared with baseline sample plan.
As lithography continues to advance beyond 40nm, overlay control will also need to continue to move forward with new and more sophisticated control strategies. High-order correction has been demonstrated as an effective approach to meeting current 40nm overlay control requirements. The source of variance analysis technique also provides a fast diagnostic methodology to troubleshoot and de-convolute complicated overlay control issues that could otherwise consume much more time and resources. Sample planning is another important factor that needs more consideration in the move to high-order control, and it can have significant repercussions for not only productivity but also yield.
Finally, an effective high-order control strategy requires both grid and field control. Although high-order grid control has become widely accessible to users with the latest generation of scanners, high-order field control still requires more collaboration between the user and vendor. Specifically, on the scanner side, adjustment knobs for field control need to become more easily accessible. Moreover, on the metrology side, high-performance small targets that can be placed in-field to capture these in-field errors also need to be readily available in order to enable a production-worthy control solution. Looking ahead, we are optimistic that these developments will continue to drive even more improved and effective high-order control strategies that can be readily integrated into production with compelling value creation for end users.
1. A. Sukegawa, S. Wakamoto, S. Nakajima, M. Kawakubo, N. Magome, “Overlay Improvement by Using New Framework of Grid Compensation for Matching,” Proc. SPIE 6152, 61523A (2006).
2. T. Kono, M. Takakuwa, K. Asanuma, N. Komine, T. Higashiki, “Mix-and-Match Overlay Method by Compensating Dynamic Scan Distortion Error,” Proc. SPIE 5378, 221 (2004).
3. D. Choi, A. Jahnke, K. Schumacher, M. Hoepfl, “Overlay Improvement by Non-linear Error Correction and Non-linear Error Control by APC,” Proc. SPIE 6152, 61523W (2006).
4. X. Chen, M. E. Preil, M. Le Goff-Dussable, M. Maenhoudt, “An Automated Method for Overlay Sample Plan Optimization Based on Spatial Variation Modeling,” Proc. SPIE 4344, 257 (2001).
Chin-Chou Kevin Huang received his PhD in mechanical engineering from the U. of Florida at Gainesville. He is a principal engineer at KLA-Tencor, 160 Rio Robles, San Jose, CA 95134 USA; ph.: 408-875-1085; email [email protected].
David Tien received his BS chemical engineering from U.C. Berkeley and is a marketing director at KLA-Tencor.