Learning the secrets of design for yield


07/01/2013

Dr. Zhihong Liu,

Executive Chairman, ProPlus Design Solutions


Random process variations and layout-dependent effects are a fact of life for designers working at advanced process nodes, and they become critical at 45nm and below. Beyond random and systematic variation, reliability effects such as bias temperature instability (BTI) also become prevalent, introducing another dimension of variation that impacts parametric yield.


These variations are unavoidable and, in fact, grow as we move to more advanced nodes, where circuit designers encounter yield problems and must spend extra effort on variation analysis to trade off yield against performance.


On one hand, foundries must double or even triple their efforts to build complicated model libraries that characterize the different types of variation, yet covering every variation source across the full statistical space remains an impracticality.


On the other hand, running variation analysis efficiently, with the best use of foundry models, becomes critical for circuit designers, and it is one of the more challenging aspects of system-on-chip (SoC) design that project teams face daily.


This means design-for-yield (DFY) considerations are more important than ever. And yet, we as an industry may not fully understand device modeling and its impact on DFY results, due in large measure to the lack of a clear definition of DFY. Some people confuse DFY with design-for-manufacturing (DFM), consider DFY the foundry's responsibility, or do not know what role DFY plays. The value of DFY depends heavily on how "good" the foundry models are and how efficiently a tool can leverage model information to run the needed analyses, such as statistical circuit simulation.


Foundry models can never be perfect, but they represent the process information that DFY analysis requires. Designers need to set appropriate expectations for models, especially for advanced technologies, and understand their limitations. With that understanding, extracting information from the models and putting it to good use with DFY tools becomes all the more critical.


Of course, DFY is not a new phenomenon, and tools categorized as DFY have been available commercially for some time now. They have not been widely adopted because they have not provided enough value to project teams, owing to a lack of information, or of confidence, in the analysis results. Statistical simulation, such as Monte Carlo analysis, has been costly and time consuming, even for a 3σ problem. Designers either skip Monte Carlo or run only a small number of samples, which limits the confidence level and makes DFY analysis results unreliable and less valuable. Other types of analysis, including process-voltage-temperature (PVT) analysis, run into similar problems when designers want to cover all corner cases, which can easily number in the hundreds. A faster simulation engine, intelligent statistical analysis algorithms, and better use of foundry model information are the key components EDA companies need to provide to make DFY tools more practical and reliable.
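To see why a small sample count limits confidence, consider the back-of-the-envelope Python sketch below. It is a hypothetical illustration, not tied to any particular simulator or foundry model: the offset-voltage expression, the spec limit, and the sample sizes are all assumptions made for the example. It estimates yield against a limit placed near 3σ of the output distribution and reports a 95% confidence interval on that estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_offset_mv(n):
    """Stand-in for a SPICE Monte Carlo run: a hypothetical input-offset
    voltage (mV) driven by the threshold-voltage mismatch of two devices.
    The resulting offset is Gaussian with sigma = 2*sqrt(2) ~ 2.83 mV."""
    dvt1 = rng.normal(0.0, 1.0, n)  # normalized Vt variation, device 1
    dvt2 = rng.normal(0.0, 1.0, n)  # normalized Vt variation, device 2
    return 2.0 * (dvt1 - dvt2)

SPEC_MV = 8.49  # pass/fail limit at ~3 sigma, so true yield is ~99.73%

for n in (1_000, 100_000):
    offset = simulate_offset_mv(n)
    fails = np.count_nonzero(np.abs(offset) > SPEC_MV)
    yield_est = 1.0 - fails / n
    # 95% normal-approximation interval; it degenerates to zero width if
    # no fails are observed, which is itself a symptom of undersampling.
    half = 1.96 * np.sqrt(yield_est * (1.0 - yield_est) / n)
    print(f"n = {n:>7,}: yield = {yield_est:.5f} +/- {half:.5f} ({fails} fails)")
```

With 1,000 samples the interval half-width is on the order of 0.3%, roughly the size of the 3σ fail rate itself, so the run can barely distinguish a passing design from a failing one, and tightening the half-width by a factor of ten costs one hundred times the simulations. A similar counting argument applies to PVT analysis: 5 process corners × 4 supply voltages × 5 temperatures already means 100 simulations per measurement.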


The final key is on the application side. Circuit designers need to understand when and where they can apply DFY on top of their traditional design flow, and how to leverage DFY to achieve an optimal trade-off between yield and performance, power and area (PPA). During the 50th Design Automation Conference (DAC) in June, a panel of foundry experts weighed in with their opinions: Dr. Min-Chie Jeng from Taiwan Semiconductor Manufacturing Co. (TSMC); Dr. Luigi Capodieci from GLOBALFOUNDRIES; and Dr. Bruce McGaughy from ProPlus Design Solutions, Inc. The panel, moderated by SST Editor-in-Chief Pete Singer, shared best practices and techniques for managing these nanometer-scale effects to improve manufacturability and yield.



Solid State Technology | Volume 56 | Issue 5 | July 2013