IBM: 22nm node will need “computational scaling”

by Pete Singer, editor-in-chief, Solid State Technology

Sept. 18, 2008 – The complexity of design and process interactions at the 22nm node is so great that it will require a new level of computational power. That was the message from IBM yesterday, as the company announced a “computational scaling” solution at its 300mm fab, Fab 323, in East Fishkill, NY.

Computational scaling, as described by the IBM execs, is essentially an end-to-end look at technology development and at how designs are accepted into the technology for manufacturing. “A few years ago we started focusing on the challenges of the advanced technology nodes and their impact on design,” said Kevin Warren, director of design and technology integration at IBM’s Semiconductor Research and Development Center (SRDC). “The computational scaling ecosystem that we’re announcing today is really one that exploits the advantages of the bigger IBM, in high-performance computing and mathematical optimizations, with the semiconductor processing expertise, paired with our partners.” Those partners currently include Mentor Graphics and Toppan Printing, with more to follow.

In a statement, IBM described this new approach as an ecosystem incorporating several components: a new resolution-enhancement technique (RET) that uses source-mask optimization (SMO); virtual silicon processing with TCAD; predictive process modeling; design-rule generation and corresponding models; design tooling and enablement; complex illumination; variance control; and mask fabrication, along with necessary partnerships.

Subramanian (Subu) S. Iyer, chief technologist in the SRDC’s systems and technology group, said this is the first step on a journey to be “ready when 22nm is ready for prime-time ramp” around the 2011 timeframe. “We will have this structure and this ecosystem ready.”

The main impetus for the new approach is a recognition that standard lithography approaches are facing fundamental limitations. “We have pretty much reached the limit of straightforward use of the 193nm immersion as we go beyond the 32nm node,” Iyer said. “For a variety of very practical reasons, there is no lithography exposure tool on the horizon in time at least for 22nm.” The idea behind computational scaling is that these limitations can be overcome by using mathematical techniques to modify the shape of the masks and characteristics of the illuminating source at each layer of an integrated circuit.

The aspect receiving the greatest emphasis in the new approach is lithography, which requires a “deep understanding of the interaction of design, masks, silicon, resists, and so forth,” noted Iyer. Computational lithography is focused on the back end, where mask shapes are created; advances there will include optimizing what’s on the mask together with the illumination system, Warren added. “There will be a much tighter interaction and much more flexible look at how we need to manipulate shapes and how we can do a co-optimization of the shapes with the illumination system,” he said. He also pointed to work done with Rensselaer Polytechnic Institute to simulate conditions in a “virtual fab,” to understand the implications “for what we can print and what it’s going to wind up looking like on wafer.”
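
IBM did not detail the underlying algorithms, but the basic idea of source-mask co-optimization can be sketched in a few lines. The toy Python example below is purely illustrative and is not IBM’s or Mentor’s implementation: it assumes a simplified 1-D blur-and-threshold imaging model and uses finite-difference gradient descent to jointly adjust mask pixel values and the weights of a few hypothetical illumination components, so that the simulated printed pattern approaches a target layout.

```python
# Toy 1-D illustration of source-mask optimization (SMO). Everything here is
# an assumption for illustration (the blur-based imaging model, the sigmoidal
# resist model, the finite-difference optimizer); it is not IBM's algorithm.
import numpy as np

def gaussian_kernel(sigma, radius=15):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def aerial_image(mask, source_weights, sigmas):
    # Abbe-style sum: each illumination component contributes a blurred copy
    # of the mask, weighted by its (normalized, non-negative) source weight.
    w = np.clip(source_weights, 0.0, None)
    w = w / (w.sum() + 1e-12)
    return sum(wi * np.convolve(mask, gaussian_kernel(s), mode="same")
               for wi, s in zip(w, sigmas))

def printed(image, threshold=0.5, steepness=25.0):
    # Simple sigmoidal resist model: intensity above the threshold "prints".
    return 1.0 / (1.0 + np.exp(-steepness * (image - threshold)))

def loss(mask, source_weights, sigmas, target):
    # Mismatch between what would print on the wafer and the design intent.
    return np.mean((printed(aerial_image(mask, source_weights, sigmas)) - target) ** 2)

# Target layout: two lines that are hard to resolve with heavy blur alone.
target = np.zeros(64)
target[20:28] = 1.0
target[36:44] = 1.0

sigmas = np.array([1.0, 3.0, 6.0])    # blur widths of candidate illumination components
source_w = np.array([0.3, 0.4, 0.3])  # source weights to be optimized
mask = target.copy()                  # start the mask at the design intent

# Co-optimize mask pixels and source weights with finite-difference gradient descent.
lr_mask, lr_src, eps = 0.5, 0.05, 1e-4
for step in range(200):
    base = loss(mask, source_w, sigmas, target)
    grad_mask = np.zeros_like(mask)
    for i in range(mask.size):
        m = mask.copy(); m[i] += eps
        grad_mask[i] = (loss(m, source_w, sigmas, target) - base) / eps
    grad_src = np.zeros_like(source_w)
    for i in range(source_w.size):
        s = source_w.copy(); s[i] += eps
        grad_src[i] = (loss(mask, s, sigmas, target) - base) / eps
    mask = np.clip(mask - lr_mask * grad_mask, 0.0, 1.0)
    source_w = np.clip(source_w - lr_src * grad_src, 0.0, 1.0)

print("final loss:", loss(mask, source_w, sigmas, target))
print("optimized source weights:", np.round(source_w / source_w.sum(), 3))
```

In production computational lithography the imaging and resist models are rigorous physical simulations and the optimizers are far more sophisticated, but the structure, a single objective evaluated jointly over mask and source variables, mirrors the co-optimization Warren describes.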

Another major aspect of computational scaling is design-technology co-optimization: developing ground rules early in technology development. “We’ve developed some automation techniques in that area so we can do that in a more robust, fully visible way,” Warren said, and then “we really look at what we need to do to that design to make it manufacturable.” — P.S.
