by Debra Vogler, Senior Technical Editor, and Phil LoPiccolo, Editor-in-Chief, Solid State Technology
Anyone trying to follow the topic of design for manufacturing (DFM) over the last few years would be hard-pressed to dissect the nuanced boundary lines between what each company in the DFM space actually does, or to decide how users of DFM/EDA tools should evaluate their return on investment. ConFab panelist Joe Sawicki, VP & GM at Mentor Graphics, thinks we should look at the “DFM” label in a new way.
To help define DFM, Sawicki began by dividing up the landscape into three distinct categories:
– “Old school” tools, which include the traditional design rule checking, parasitic extraction, physical analysis, TCAD, MDP, and DFT tools;
– Manufacturing-for-design (MFD) tools, such as OPC, verification, and yield ramp, which are handled in the foundry and essentially try to give the designer “WYSIWYG” (what you see is what you get), out of the fab;
– Design-focused tools, such as yield analysis, lithography analysis, DFM routing, cell swapping, and chip polishing, a new class of tools that emerged around the 130nm node and uses additional manufacturing data to analyze the impact of process variability.
To help evaluate return on investment, Sawicki first used information gleaned from EDAC, Dataquest, and other sources to examine the revenue side of the three categories of DFM tools and extrapolated where the revenue will go out to 2010 (see Figure, above). He indicated that through 2007, the majority of revenue will continue to go into the “old-school” products, but that these tools are on a mature revenue track: their revenue is substantial, but growth is in the range of 10%/year. What has been growing at a rapid rate, however, and becoming a larger percentage of the overall business is the MFD tool segment. Meanwhile, the design-focused tools, although they are being introduced at the vast majority of customers for 65nm design, currently make up a relatively small percentage of the market.
Given the amount of attention paid to DFM around the 130nm node, especially in light of the massive critical yield failures that resulted when litho systems could not resolve features not flagged by a design-rule check, the question that arises is, why didn’t usage of these tools explode among designers? According to Sawicki, the delayed growth of DFM was the result of major system improvements that occurred, particularly as the industry moved from 130nm to 45nm designs, including the following:
– Correction methodologies moved from being a mix of rule-and-model based implementations to being purely model based.
– The computation required per chip increased from ~4 CPU days/chip to ~500 CPU days/chip, given increases in the complexity of the model, density of the chip, and so forth.
– Verification moved from sparse, rule-based methods to dense pixel-based implementations.
– Modeling went from a nominal process condition to a process window to enable greater control of overall variability.
– Calibration methods moved from doing simple CD measurements on 200 points to contour measurements on up to 20,000 points.
– Repair techniques were put in place so that when verification found an issue, rather than relying on hand processing, the computational lithography system itself made the repair.
– Scanner technology improved NA from 0.7 to 1.2, thanks to immersion lithography.
These improvements have driven the increase in the market for MFD tools. “However, the stressful part is that if you look at scanner technology,” Sawicki said, “it is difficult to put a roadmap in place that shows us having anything other than a 193nm wavelength lithography tool with 1.35NA all the way through 22nm.” Under that constraint, he said, it becomes difficult to get yield entitlement for unconstrained design topologies.
Showing a process-window verification of a random design topology, he demonstrated that at 45nm, using single exposure with a k1 value of 0.45, the design would be fairly reasonable to resolve — but at 32nm, a single exposure technique on unconstrained design topology would be problematic at best. “You will have to go to double exposure with all the attendant cost increases, and so suddenly this is where restrictive design-rule techniques and litho-simulation checks become critical,” Sawicki said. Moreover, at 22nm, “all bets are off, even with a double exposure technique,” he added. “MFD tools are going to be needed to get the best possible image out of the light source, while the DFM tools will be required to ensure chip yield, given the constraints of that overall lithography system for both the scanner and the computational litho system.”
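The single- vs. double-exposure argument can be checked against the Rayleigh criterion, half-pitch = k1·λ/NA, using the article’s own numbers (193nm wavelength, 1.35 NA). The sketch below treats the node name as the target half-pitch and uses the commonly cited k1 ≈ 0.25 as the theoretical single-exposure floor; both are simplifying assumptions for illustration, not figures from Sawicki’s talk.

```python
# Rayleigh criterion: half_pitch = k1 * wavelength / NA,
# so the k1 required for one exposure is k1 = half_pitch * NA / wavelength.
WAVELENGTH_NM = 193.0            # ArF scanner wavelength cited in the article
NA = 1.35                        # immersion NA cited through 22nm
K1_SINGLE_EXPOSURE_LIMIT = 0.25  # commonly cited theoretical single-exposure floor

def required_k1(half_pitch_nm, na=NA, wavelength_nm=WAVELENGTH_NM):
    """k1 needed to print a given half-pitch in a single exposure."""
    return half_pitch_nm * na / wavelength_nm

# Treating the node name as the target half-pitch (an approximation):
for hp in (45, 32, 22):
    k1 = required_k1(hp)
    verdict = "single exposure feasible" if k1 >= K1_SINGLE_EXPOSURE_LIMIT \
        else "below single-exposure limit; double patterning needed"
    print(f"{hp}nm: k1 = {k1:.2f} -> {verdict}")
```

The result mirrors the talk’s progression: 45nm stays above the floor, 32nm drops below it (forcing double exposure), and 22nm falls further still.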
Some techniques, such as design-driven metrology and physically based test, haven’t figured as prominently in the news on DFM, Sawicki said, noting that they combine data that has existed for years in new and novel ways. He gave one example in which a company that had been implementing rule-based OPC at 130nm did a yield-entitlement study using the critical area of all its cells, together with a diagnostic study of its test failures, to understand which cells were driving failure. The company discovered that one cell had a significantly higher failure rate, which led it to an issue with its rule-based recipe. Fixing that problem drove up yield entitlement for all its designs. This process delivered a very attractive ROI because it used existing tools and existing methodologies, he explained.
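The core of that study is a comparison of two data sets that already exist: each cell’s share of total critical area (its expected contribution to random-defect failures) versus its share of diagnosed failures. A cell failing far more often than its critical area predicts points to a systematic problem such as a bad OPC recipe. The sketch below illustrates the idea with entirely hypothetical cell names, areas, failure counts, and a 2x flag threshold; none of these figures come from the study Sawicki described.

```python
# Hypothetical per-cell data: total critical area across all instances (cm^2)
# and failures attributed to each cell by scan-test diagnosis.
cells = {
    "nand2": {"crit_area_cm2": 0.020, "diagnosed_fails": 21},
    "dff":   {"crit_area_cm2": 0.035, "diagnosed_fails": 30},
    "mux4":  {"crit_area_cm2": 0.010, "diagnosed_fails": 52},  # the outlier
}

total_area = sum(c["crit_area_cm2"] for c in cells.values())
total_fails = sum(c["diagnosed_fails"] for c in cells.values())

for name, c in cells.items():
    expected_share = c["crit_area_cm2"] / total_area   # random-defect prediction
    observed_share = c["diagnosed_fails"] / total_fails
    ratio = observed_share / expected_share
    flag = "  <-- failing beyond critical-area prediction; check OPC recipe" \
        if ratio > 2.0 else ""
    print(f"{name}: expected {expected_share:.0%}, "
          f"observed {observed_share:.0%}, ratio {ratio:.1f}{flag}")
```

Here mux4’s observed failure share is more than triple what its critical area predicts, which is the kind of signature that, in the example Sawicki cited, exposed the flawed rule-based OPC recipe.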
The term DFM can mean different things to different people, said Sawicki, who defined the terminology of this evolving technology. In the broadest sense, he said, DFM is not a tool, but a multi-disciplinary design integration process that is delivering value and return-on-investment — and it will continue to undergo significant change, especially as the industry transitions from 32nm to 22nm designs. — D.V., P.L.