by Bob Haavind, Editorial Director, Solid State Technology
The future of lithography from the viewpoint of a major foundry was presented at ConFab by K.K. Lin of Chartered Semiconductor of Singapore.
Keeping scaling alive will require concurrent advances across a trio of disciplines: physical lithography; computational lithography, including OPC/RET (optical proximity correction/resolution enhancement techniques); and DFM (design for manufacturing), according to Lin.
Foundries, which must deal with a wide range of designs, often in limited volumes, analyze lithography options from a somewhat different viewpoint than major microprocessor and memory makers with their multi-million-unit production. High throughput is critical for high-volume fabs, for example, but may not be as vital for shorter runs.
Lin summarized some of the strengths and challenges for the major lithographic tool contenders for the next few years.
The huge advantage of high-index immersion lithography is that it is still optical, so it can draw on decades of experience as well as an existing infrastructure. To push the NA beyond 1.35, high-index glass will be needed for the final lens element, paired with a high-index immersion fluid, so that the lens material does not become the limiting factor. One candidate material is LuAG, which might have the potential to reach an NA of 1.65, Lin suggested, citing a presentation by Harry Sewell et al. at SPIE 2007 (though Sewell's talk at the 2008 SPIE suggested more time is needed to improve this lens material). Another concern, Lin added, is potential fluid-lens contamination.
Extreme ultraviolet (EUV) lithography would operate at 13.5nm vs. the 193nm of today’s ArF stepper/scanners, making high resolution possible without computationally intensive RETs. Unfortunately, Lin pointed out, there remain a number of serious challenges to be overcome, such as low throughput, linewidth roughness (LWR), resist outgassing, achieving defect-free reticle blanks, optical mirror lifetime, and flare. An important tradeoff that must be balanced, he noted, is meeting the need for high resist sensitivity without inducing shot noise effects that would result in more severe LWR problems.
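The resolution stakes behind these two options can be seen from the Rayleigh criterion, half-pitch = k1 · λ / NA. A minimal sketch, assuming illustrative k1 and NA values (not figures from Lin's talk):

```python
# Rayleigh criterion: minimum half-pitch = k1 * wavelength / NA.
# The k1 and EUV NA values below are illustrative assumptions.

def half_pitch(k1: float, wavelength_nm: float, na: float) -> float:
    """Minimum resolvable half-pitch in nm per the Rayleigh criterion."""
    return k1 * wavelength_nm / na

# 193nm ArF immersion at today's water-limited NA of 1.35
arf_today = half_pitch(k1=0.28, wavelength_nm=193.0, na=1.35)

# Hypothetical high-index immersion with a LuAG final element (NA ~1.65)
arf_high_index = half_pitch(k1=0.28, wavelength_nm=193.0, na=1.65)

# EUV at 13.5nm resolves finer pitches even at a relaxed (higher) k1
# and a modest mirror-optics NA
euv = half_pitch(k1=0.5, wavelength_nm=13.5, na=0.25)

print(f"193i, NA 1.35: {arf_today:.1f} nm half-pitch")     # ~40.0 nm
print(f"193i, NA 1.65: {arf_high_index:.1f} nm half-pitch")  # ~32.8 nm
print(f"EUV,  NA 0.25: {euv:.1f} nm half-pitch")             # ~27.0 nm
```

The comparison shows why EUV can relax k1 (and hence the RET burden) while still out-resolving even high-index immersion: the 14x shorter wavelength dominates the ratio.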
Another candidate is imprint lithography, using a step-and-flash imprinting process under development by Molecular Imprints. This approach would have the advantage of high pattern fidelity with the LWR only limited by the template. There are challenges, however, Lin pointed out, with potential mask damage and contamination and defect levels. Also, maskmaking would be expensive and a 1X template would be much less forgiving than a 4X optical mask. Overlay would also be a challenge.
A fourth candidate that is intriguing for foundries is maskless lithography using a multi-column beam system. Lin cited systems under development by IMS and Mapper Lithography NV, in which there would be no mask cost, and diffraction limitations would be negligible. The technique could prove useful for small volumes and application-specific ICs (ASICs), for example. But Lin pointed out that throughput is still far from practical for commercial production. One possible solution, he suggested, is a multiple-electron-beam-direct-writer (MEBDR). High resolution was illustrated with an image of printed features down to the sub-10nm range provided by G. Bernstein of the U. of Notre Dame — but Lin suggested that shot-noise-limited CD uniformity could be a serious concern.
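The shot-noise concern Lin raised is statistical: the number of electrons (or photons) landing in a pixel is Poisson-distributed, so relative dose noise grows as 1/√N when the dose is cut to raise throughput. A minimal Monte Carlo sketch, with illustrative doses and pixel counts (assumptions, not figures from the talk):

```python
# Sketch of shot-noise scaling: relative dose noise ~ 1/sqrt(N) for a
# mean of N particles per pixel. Doses below are illustrative assumptions.
import random

def relative_dose_noise(mean_particles: float, pixels: int, seed: int = 0) -> float:
    """Monte Carlo estimate of sigma/mean for per-pixel particle counts.

    Uses a Gaussian approximation to the Poisson distribution
    (mean = N, sigma = sqrt(N)), valid when N >> 1.
    """
    rng = random.Random(seed)
    samples = [rng.gauss(mean_particles, mean_particles ** 0.5)
               for _ in range(pixels)]
    mean = sum(samples) / pixels
    var = sum((s - mean) ** 2 for s in samples) / pixels
    return var ** 0.5 / mean

# Quartering the dose doubles the relative noise — the sensitivity/LWR
# tradeoff described for both EUV resists and e-beam direct write.
for n in (10000, 2500, 625):
    print(f"{n:6d} particles/pixel -> {relative_dose_noise(n, 5000):.3%} dose noise")
```

At 10,000 particles per pixel the dose noise is about 1%; at 625 it is about 4%, and that dose variation translates directly into edge placement and LWR variation.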
The next generation of OPC/RET methods was also discussed by Lin. The scope of OPC is expanding beyond optics to other process effects, such as etch proximity correction or topographic effect correction. Traditionally the goal of OPC was to print the geometry on silicon as close to the designed features as possible. In the future it may be necessary to model the process window as a factor, he explained, so that some geometries might have to deviate somewhat from the design in order to remain printable.
Methods are being investigated to deal with the long calculation times for many physical effects beyond lens optics that have been formulated as mathematical polynomials. Many efforts involve using more physics-based modeling in areas such as mask three-dimensional effects, resist process effects, and etch process loading effects. Precision is also being pushed, Lin explained, pointing out that even a 1nm error will eat up half of the CD error margin for 22nm technology.
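The arithmetic behind that 1nm claim is worth spelling out. Assuming a common rule-of-thumb CD tolerance of ±10% of the nominal feature size (an assumption here, not a figure from the talk):

```python
# Back-of-envelope check of Lin's point that a 1nm modeling error consumes
# roughly half the CD error margin at the 22nm node.
# The +/-10% CD tolerance is an assumed industry rule of thumb.

cd_nm = 22.0
cd_tolerance = 0.10 * cd_nm          # allowed CD deviation: 2.2 nm
opc_error = 1.0                      # a 1 nm OPC model error...
fraction = opc_error / cd_tolerance  # ...uses ~45% of that budget

print(f"CD tolerance: {cd_tolerance:.1f} nm; a 1 nm error uses {fraction:.0%} of it")
```

With only about half the budget left for everything else (dose, focus, mask CD, etch), modeling precision at this node becomes a first-order concern.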
Corrections have traditionally been edge-based, but the complexity of design is requiring a move toward pixel-based grid type simulation to reduce calculation times.
The quality of OPC depends more and more on design style as features shrink, requiring closer interaction between designers and the foundry to deliver OPC-friendly designs. The computation required will be so intensive that more than 1000 CPUs might be needed to finish one job. A specialized configuration using an FPGA chip could scale down the total hardware needed. One major microprocessor maker has already set up a tapeout center with supercomputing capabilities to support future hardware-accelerated OPC engines, Lin commented.
A move to EUVL will mean a new type of optics, which will require physics-based modeling. Even though EUVL will offer a very high k1 factor, the requirement for accuracy will be very tight, Lin pointed out. OPC will still be needed with EUV, he believes.
He also emphasized the growing opportunity for implementing design-for-manufacturing (DFM) techniques as the shrink moves toward the 32nm to 22nm range, when computational needs will increase exponentially, as indicated in Lars Liebmann’s SPIE 2008 presentation. There will need to be much tighter design and process co-optimization, according to Liebmann, to move from the schematic design to functional data that will meet device parametric requirements. Chartered is working on what it calls “soft DFM,” Lin said, where there are not so many design constraints, but at 32nm and 22nm, the company may have to go to “hot DFM,” with tighter design constraints.
One result may be ultra-regular layouts, even for high-performance logic. Lin showed an example: a regularized layout of a 65nm IBM PowerPC 405 processor core that matched the performance of a traditional standard-cell design in the same die area. The manufacturing benefits were simulated, according to Liebmann, showing fewer hot spots and less variability, while designability appeared not only acceptable but very favorable.
Lin also echoed a point made by Toshiba's Kinugawa: the time before early adopters move to the most advanced nodes will stretch out. This will increase the business of what he called the "masses" using more trailing-edge technology.
It will require steady advances in all three of these critical areas if future lithography is to meet all requirements for continuing feature shrinks into the next decade, according to Chartered’s Lin. — B.H.