by M. David Levenson, Editor-In-Chief, Microlithography World
Much of the technology discussion at SEMICON West focused on the next shrink, to 45nm or 32nm half-pitch, and on how the industry has reached an inflection point: material innovations, rather than reduced dimensions, must now carry the burden of increasing performance, while the geometrical shrink perhaps lowers costs instead.
Photolithography was not a hot topic at SEMICON West, possibly because the first 45nm-capable immersion scanners had just shipped and no one had any real insight into how 32nm might be achieved economically. Ben McKee of TI worried that lithography cost per level would explode once today's single-exposure immersion technology becomes obsolete, no matter what comes next.
Discussion at a Sokudo-sponsored “Lithography Breakfast” centered on litho-cell productivity and whether the scanner or the track limits throughput. Today, scanner vendors claim enormous throughputs (~180 wafers/hr) that are rarely achieved in production. In his keynote, Michael Lercell, litho manager at SEMATECH, noted that the drop in the value of k1 has resulted in increased process complexity, with three-layer resist stacks becoming standard for hyper-NA immersion. Different levels require different recipes and processes, increasing the complexity of litho-cell operation. Even when EUV replaces optical lithography, different levels will require different processes. Thus flexibility is, and will remain, the key.
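Lercell's point about falling k1 can be made concrete with the Rayleigh scaling relation; the numbers below are illustrative assumptions (ArF at λ = 193nm with a 1.35-NA hyper-NA immersion lens), not figures from his talk:

```latex
% Rayleigh scaling: half-pitch HP = k_1 \lambda / \mathrm{NA},
% so k_1 = HP \cdot \mathrm{NA} / \lambda
k_1^{(45\,\mathrm{nm})} = \frac{45 \times 1.35}{193} \approx 0.31
\qquad
k_1^{(32\,\mathrm{nm})} = \frac{32 \times 1.35}{193} \approx 0.22
```

Since single-exposure imaging cannot reach below k1 = 0.25, the 32nm half-pitch figure suggests why double patterning, with all its added process complexity, enters the discussion.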
Charles Pieczulewski of Sokudo gave the track-maker's perspective. Historically, tracks were designed to be 20% faster than the exposure tools they serve. If a track is linked to one stepper, it loses throughput when different lots require different processes. Today the highest throughputs occur in 200mm fabs, so a track designed for 300mm may not be keeping up with demand. Sokudo's latest RF3S track would deliver 180 300mm wafers/hr, and the challenge will be to raise that to 240 wafers/hr in 2015, when double patterning will demand it.
Pieczulewski suggested the radical step of de-linking the exposure tool and track, which he predicted would free tracks to achieve the 300 wafers/hr throughput of other stand-alone tools. De-linking would also allow different tracks to be optimized for different levels or processes, increasing flexibility and reducing overall downtime. He claimed that today's resists, with their topcoats and other refinements, are much less sensitive to delay than the early chemically amplified resists that required linking in the first place. At worst, only the post-exposure bake (PEB) step might need to be linked directly to the scanner, to limit CD variability. The rest of the process could then take place in stand-alone coat, develop, and metrology systems. Keeping the FOUPs flowing would then be only a software problem.
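Pieczulewski's argument can be sketched as a toy throughput model; the rates and the recipe-change overhead below are assumptions for illustration (loosely based on the wafers/hr figures quoted above), not Sokudo data:

```python
# Toy litho-cell throughput model. All rates (wafers/hr) and the
# recipe-change overhead are illustrative assumptions.
scanner_rate = 180   # claimed scanner throughput
track_rate = 216     # track designed ~20% faster than the scanner

def linked_throughput(scanner, track, recipe_overhead=0.15):
    """A linked cell runs at the slower tool's rate, reduced by an
    assumed overhead for recipe changes between different lots."""
    return min(scanner, track) * (1 - recipe_overhead)

def delinked_throughput(track_standalone=300):
    """A de-linked track is free to run at stand-alone-tool rates."""
    return track_standalone

print(linked_throughput(scanner_rate, track_rate))  # 153.0
print(delinked_throughput())                        # 300
```

The point of the sketch is only that linkage makes the whole cell pay for the slowest tool plus every recipe change, whereas a de-linked track is bounded only by its own mechanics.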
A panel of exposure-tool makers, however, seemed deeply skeptical. Phil Ware of Canon made the analogy of a sushi bar, with some flavors circulating endlessly while others are immediately consumed. Skip Miller of ASML worried about whether the flow could be managed with different recipes coming and going. Gene Fuller of Nikon expressed concern about letting wafers sit around, anywhere.
In a lunchtime response, Fuller described Nikon's new KrF and ArF dry exposure tools, the NSR-210D and NSR-310F, each claiming 174 wafers/hr throughput. These machines employ “tandem” stages, like Nikon's immersion scanners, and thus benefit from the higher acceleration possible with a low-mass wafer stage. The throughput king, however, was the SF150, a true i-line stepper (with no moving reticle stage and a 26x33mm field) that Fuller claimed would print 180 300mm wafers/hr, even with slow i-line resists, which do not require stepper-track linkage. Sokudo's throughput challenge clearly has been accepted at Nikon, at least for mix-and-match applications.
Various forms of imprint lithography achieved greater prominence this year. Molecular Imprints claimed its SFIL process was a “drop-in replacement” for photolithography, but with 2.6nm LWR (3σ) and zero bias at 32nm. Mark Melliar-Smith, CEO of Molecular Imprints, showed a 300mm wafer that had been printed with a 26x33mm field using the company's Imprio 250 system. Overlay remains a challenge, apparently at the 20nm level, but newer, more controlled measurements should demonstrate progress at least as fast as EUV's, according to CTO S.V. Sreenivasan. Molecular Imprints and other imprint-litho vendors (e.g., Obducat and EV Group) also featured full-wafer tools for other industries, such as high-brightness LEDs and medicine, with diverse technology approaches.
DFM in doubt?
A panel on design-for-manufacturability (DFM), hosted by analyst Gary Smith, highlighted concerns about designed-in variability, a subject also addressed by TI's Ben McKee at the KLA-Tencor event. Smith noted that venture capitalists turned off the money spigot for DFM start-ups when they realized that the total market was more likely to be $150 million than $1.8 billion, and that DFM could take seven years to implement, as “design for test” did. Still, the yield problem remains real and is unlikely to be solved by new equipment or EDA Band-Aids, he emphasized, noting that all 45nm designs so far use restricted design rules just to make the problem manageable.
Joe Sawicki of Mentor Graphics called DFM an “outright necessity” at 45nm, saying the industry has to leave the “buzz wars” of the 65nm era behind and deliver tools that can automatically repair problematic structures without breaking others. Richard Tobias, a designer and founder of startup Cake Technologies, noted that his people did not understand layout but realized that they now owned the yield problem. He lamented that progress had forced digital designers to work on details the way analog designers do, and that the time required to achieve success with today's DFM tools was not predictable.
Nitin Deo of Clearshape, introduced as the one company that had delivered a DFM tool to designers, noted that one solution to the failure of the WYSIWYG (“what-you-see-is-what-you-get”) paradigm is to change what designers see. That means not only moving from GDSII geometries to model-based, silicon-accurate design, according to Deo, but also taking into account what a designer actually can do. Designers and fabs will have to work together to mitigate, co-optimize, and control variability, rather than just handing the problem around. However, he said, the model-based variability analysis used by designers has to be certified by the fabs before tape-out, and that is not easy when dealing with processes that are still evolving.
Lars Liebmann of IBM wound up the panel discussion by taking the client's perspective. While OPC and RET were engineering disciplines performed by experienced professionals, DFM processes seemed to be run by amateurs, he noted, and that did not seem likely to succeed entirely. While DFM has to become a formal signoff requirement (eventually), in his opinion the DFM kits have to work for users without disrupting everything and must give quantifiable benefit in the form of improved yield and performance. Liebmann, perhaps the most persuasive advocate of restrictive design rules, seemed to doubt that the DFM segment was ready for the burden being thrust upon it. –M.D.L.