Accuracy, speed, new physical phenomena: The future of litho simulation


02/01/2006

Chris A. Mack, Consultant, Austin, Texas

Optical lithography modeling began in the early 1970s and represented the first serious attempt to describe lithography not as an art, but as a science [1-3]. Thirty years later, optical lithography continues to make dramatic advances that enable the profitable continuation of Moore’s Law. During this time, lithography simulation has served two essential purposes: to validate and improve the industry’s theoretical understanding of lithography and to provide a tool to the average lithographer to apply this theory to real lithography problems. By both of these measures, the industry’s efforts in lithography simulation have been extremely successful. It is not an overstatement to say that semiconductor manufacturing as we know it would not be possible without lithography simulation.

But just as lithography technology moves at a relentless pace toward finer features, simulation technology must move just as fast to keep up with, or stay ahead of, the needs of lithography technology development. The future of simulation will be even more challenging than the past has been. There are three major areas of lithography simulation development for the foreseeable future: improved accuracy, improved simulation speed, and the addition of new physical phenomena.

Accuracy

Lithographers want simulation to be more accurate than their experimental data. While this goal may seem unattainable, it is in fact quite possible. Borrowing the ideas and terminology of metrology tools, the goal is to keep the total simulation uncertainty (TSU) below the total measurement uncertainty (TMU) of a critical dimension measurement. For example, a requirement for a lithography simulator might be a TSU <2nm at the 65nm node. Much, probably most, of the focus of model and simulator development today is on achieving greater accuracy.

Like metrology data, simulation uncertainty can be broken down into two components: precision and accuracy. Precision is the result of numerical errors in solving the model equations. Like a metrology tool, the precision-to-tolerance ratio (P/T) of a simulator should be <0.2 (and ideally, should be as low as 0.1). This means that simulator precision should be <±2% of the target critical dimension (CD). Of course, simulation precision is not a random error, so precision specifications for a simulator are based on maximum numerical errors over the widest possible range of input parameters.
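As a rough numerical illustration of how these specifications interact, consider the sketch below. The ±10% CD tolerance and the quadrature combination of precision and accuracy into TSU are illustrative assumptions, not specifications from this article:

```python
import math

def pt_ratio(precision_nm: float, tolerance_nm: float) -> float:
    """Precision-to-tolerance ratio, as defined for metrology tools."""
    return precision_nm / tolerance_nm

def total_sim_uncertainty(precision_nm: float, accuracy_nm: float) -> float:
    """Combine precision and accuracy errors in quadrature -- one common
    convention; the article does not specify how TSU is formed."""
    return math.hypot(precision_nm, accuracy_nm)

# Illustrative numbers for the 65nm node, assuming a +/-10% CD tolerance.
target_cd_nm = 65.0
tolerance_nm = 0.10 * target_cd_nm        # 6.5nm (assumed tolerance)
precision_nm = 0.02 * target_cd_nm        # +/-2% of CD -> 1.3nm

print(f"P/T = {pt_ratio(precision_nm, tolerance_nm):.2f}")        # P/T = 0.20
print(f"TSU = {total_sim_uncertainty(precision_nm, 1.5):.2f}nm")  # TSU = 1.98nm
```

With these assumed numbers, a ±2% precision sits right at the P/T = 0.2 guideline, and a 1.5nm accuracy component would just meet the example TSU <2nm requirement.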

Accuracy is a consequence of the goodness of the models themselves, and how well those models have been calibrated to a given process. In general, calibration is the primary limiter of accuracy for the high-accuracy physically based simulators of today. In order to meet the seemingly unrealistic goal of simulating features with greater accuracy than can be measured, careful measurement of all input parameters is needed. In addition, comparison of simulation results to metrology results (where a fraction of a nanometer is significant) is not possible without a careful understanding of the physics and algorithms of the metrology tool.

Speed

Simulator speed is important whenever the usefulness of the results is limited by how long one must wait to get the answers. While improvements in simulator speed are always welcome, dramatic improvements can in fact enable dramatic new applications. For example, what if high-accuracy simulations, currently available over simulation areas of just tens of square microns, could be run for a full chip?

Today, full-chip lithographic simulation using approximate image models and empirical resist models has enabled useful optical proximity correction (OPC) technology. These models, however, are 3-10× less accurate than full physical models and are valid over a much narrower range of process conditions; this is the price paid for sufficient speed to make full-chip simulation practical. There is, then, a need for even faster simulations with more physically based models that provide much better accuracy over a wider range of process conditions from a single calibration.

The present approach to OPC verification has evolved from a number of separate inspection strategies. OPC decoration is verified by a design rule or optical rule checker; the reticle is verified by a reticle inspection system; and the final wafers are verified by wafer inspection and metrology tools. Each verification step looks at a different representation of the desired device pattern with little or no data flowing between them.

Although each component is a valuable part of the whole, the desired outcome is to find design for manufacturing (DFM) issues as early as possible, since early detection yields the largest savings of resources and money. For example, a defect found before the reticle is made might be corrected in a few days' time at a cost of thousands of dollars, while a defect found during wafer manufacture could easily cost 10-100× as much, with a similar increase in lost time. Lithography simulation has great potential to connect the data between each of these verification steps and enable full-chip verification across the entire lithography process window at an early stage, optimizing the design for the actual manufacturing process.

To accomplish full-chip verification, a simulation system would need to model how the design will be transferred to the reticle layer and how that reticle will be imaged into resist across the full focus-exposure process window. Simulated images could be compared to the desired pattern, and defect-detection algorithms could then be applied to determine if any unacceptable variations in the pattern occur within the nominal process window. Recently, a simulation system designed to address such a need was described [4]. By combining vector-image calculations with a new resist model and specialized supercomputing hardware, a full chip (8mm square on the wafer) can be simulated through 35 focus and exposure points in two hours. The 3σ matching of simulator results to experimental CDs through size and pitch over a range of focus and exposure corresponding to the full process window was found to be better than ±5nm.
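In outline, such a verification flow is a sweep over the focus-exposure matrix with a defect check at each point. The sketch below is a hypothetical illustration only; the toy CD model, the 25-point matrix, and the tolerance are invented for the example and are not taken from the system described in [4]:

```python
import itertools

def simulated_cd(focus_um: float, dose_mj: float,
                 target_nm: float = 65.0) -> float:
    """Toy stand-in for an image/resist simulation: CD shrinks
    quadratically with defocus and grows with dose (illustrative only)."""
    return target_nm - 40.0 * focus_um**2 + 1.5 * (dose_mj - 30.0)

def verify_process_window(target_nm: float = 65.0, tol_nm: float = 6.5):
    """Flag (focus, dose) points where the simulated CD leaves spec."""
    focuses = [-0.3, -0.15, 0.0, 0.15, 0.3]          # defocus, um
    doses = [27.0, 28.5, 30.0, 31.5, 33.0]           # exposure, mJ/cm^2
    failures = []
    for f, d in itertools.product(focuses, doses):   # 25-point FEM
        cd = simulated_cd(f, d, target_nm)
        if abs(cd - target_nm) > tol_nm:             # defect-detection check
            failures.append((f, d, round(cd, 2)))
    return failures

print(verify_process_window())
```

A real system would replace `simulated_cd` with the full mask-transfer and imaging models, and the defect check would run on simulated pattern contours rather than a single CD value, but the structure of the sweep is the same.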

New phenomena

Interesting efforts at several universities (most notably the U. of Texas at Austin) involve so-called mesoscale modeling, which works closer to the molecular level for its physical descriptions [5]. As illustrated in the figure, the simulation domain is broken down into many mesoscale (medium-sized) cells, ~0.7nm square, each containing an important subcomponent of the resist (an acid molecule, a region of free volume, a blocked polymer site, etc.). Using Monte Carlo techniques, movements and reactions of the components are followed through time to predict lithographic results. While much work remains to be done, mesoscale modeling efforts could one day aid in resist design and provide valuable insight into the mechanisms of line-edge roughness formation and statistical feature size fluctuations.
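The Monte Carlo idea can be sketched in a drastically simplified form: cells on a lattice, acid molecules random-walking from cell to cell and deblocking any polymer site they visit. All the rules and numbers below are invented for illustration and are far cruder than the models of [5]:

```python
import random

def mesoscale_deblock(grid_size: int = 50, n_acids: int = 20,
                      n_steps: int = 200, seed: int = 1) -> float:
    """Toy Monte Carlo of a post-exposure bake: acids random-walk on a
    2D lattice of cells; every blocked cell an acid visits is deblocked.
    Returns the fraction of cells deblocked after n_steps."""
    rng = random.Random(seed)
    blocked = [[True] * grid_size for _ in range(grid_size)]
    acids = [(rng.randrange(grid_size), rng.randrange(grid_size))
             for _ in range(n_acids)]
    for _ in range(n_steps):
        moved = []
        for x, y in acids:
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x = (x + dx) % grid_size          # periodic boundaries
            y = (y + dy) % grid_size
            blocked[x][y] = False             # acid catalyzes deblocking
            moved.append((x, y))
        acids = moved
    deblocked = sum(not b for row in blocked for b in row)
    return deblocked / (grid_size * grid_size)

print(f"deblocked fraction: {mesoscale_deblock():.3f}")
```

Because every event is stochastic, repeated runs with different seeds give different deblocked patterns; it is exactly this run-to-run variation that makes such models attractive for studying line-edge roughness and feature-size fluctuations.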


Example of breaking up a simulation domain into mesoscale “cells.”

The remarkable successes of lithography simulation reflect the broader successes of lithography technology development and semiconductor manufacturing advancement. And just as the continued improvements in resolution and manufacturability in our industry seem inevitable, so too will lithography simulation development continue and advance.

Acknowledgment

At the time this article was developed, Chris Mack was VP of lithography at KLA-Tencor.

References

  1. F.H. Dill, A.R. Neureuther, J.A. Tuttle, E.J. Walker, “Modeling Projection Printing of Positive Photoresists,” IEEE Trans. Electron Devices, ED-22, No. 7, pp. 456-464, 1975.
  2. W.G. Oldham, S.N. Nandgaonkar, A.R. Neureuther, M. O’Toole, “A General Simulator for VLSI Lithography and Etching Processes: Part I - Application to Projection Lithography,” IEEE Trans. Electron Devices, ED-26, No. 4, pp. 717-722, April 1979.
  3. C.A. Mack, “PROLITH: A Comprehensive Optical Lithography Model,” Optical Microlithography IV, Proc. SPIE, Vol. 538, pp. 207-220, 1985.
  4. W. Howard, J. Tirapu Azpiroz, Y. Xiong, C. Mack, G. Verma, et al., “Inspection of Integrated Circuit Databases through Reticle and Wafer Simulation: An Integrated Approach to Design for Manufacturing (DFM),” Design and Process Interaction III, Proc. SPIE, Vol. 5756, 2005.
  5. G.M. Schmid, M.D. Stewart, S.D. Burns, C.G. Willson, “Mesoscale Monte Carlo Simulation of Photoresist Processing,” J. Electrochem. Soc., Vol. 151, pp. G155-G161, 2004.

Chris Mack developed the PROLITH lithography simulation program and founded FINLE Technologies in 1990 to commercialize it. He served as VP of lithography technology for KLA-Tencor for five years after its acquisition of FINLE Technologies in 2000. He currently writes, teaches, and consults in Austin, TX; email [email protected].