ITC recap: How smart does silicon need to be?

by LeRoy Winemberg, Freescale Semiconductor, and Ken Butler, Texas Instruments

November 3, 2010 – As process geometries drop below 65nm, the differences between design models and manufactured silicon become unacceptably large. Material variability increases dramatically, and the results can be very unpredictable in terms of product performance. Up to this point, the industry-wide solution has been to apply large guardbands to compensate for the delta between models and silicon. Excessive guardbanding is expensive, however: it tends to leave a lot of performance on the table, makes timing closure more difficult, is typically inaccurate, and in the end usually leads to lost revenue.

In a panel held at this week’s IEEE International Test Conference (ITC), "How smart does our silicon need to be?", experts from Texas Instruments, Freescale Semiconductor, IBM, the U. of Connecticut, and the U. of Minnesota discussed methods that have been tried to close this gap between models and silicon reality. These approaches include better pre-silicon characterization techniques and data collection, static timing analysis (STA), and, more recently, statistical STA, among others. The consensus: with increasing random variability and the growing impact of aging effects on design reliability, a gap remains that is expensive in both time and money.

All panelists agreed that a new approach is required to solve this problem, and that a viable solution is a variety of "sophisticated" on-chip sensors/monitors — i.e., embedded circuits that are more than just simple ring oscillators — that collect data from the manufactured silicon itself. The benefit of this approach is that the data these circuits gather on-chip could be used to tune the design models for subsequent designs.
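As a rough illustration of what closing this silicon-to-model loop could look like, the Python sketch below aggregates hypothetical per-die readings from an embedded delay monitor, estimates the offset from the pre-silicon model, and sizes a data-driven guardband. Every number, name, and statistic here is an assumption for illustration; none comes from the panel or from any specific product flow.

```python
# Hypothetical sketch: aggregate on-chip monitor readings across dies to
# estimate the model-to-silicon gap. All names and numbers are illustrative;
# real monitor data and model parameters are design- and process-specific.

from statistics import mean, stdev

# Simulated per-die readings from an embedded delay monitor (ps), alongside
# the delay the pre-silicon timing model predicted for the same path.
model_predicted_delay_ps = 100.0
measured_delays_ps = [104.2, 97.8, 109.5, 102.1, 106.7, 99.3]

# The measured spread suggests how much guardband the next design actually
# needs, rather than relying on a fixed worst-case margin.
offset_ps = mean(measured_delays_ps) - model_predicted_delay_ps
spread_ps = stdev(measured_delays_ps)

# Recenter the model on silicon data and size the guardband at ~3 sigma.
recalibrated_delay_ps = model_predicted_delay_ps + offset_ps
guardband_ps = 3 * spread_ps

print(f"mean offset: {offset_ps:.1f} ps, suggested guardband: {guardband_ps:.1f} ps")
```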

Profs. Sachin Sapatnekar (UMinn) and Mohammad Tehranipoor (UConn) proposed taking this idea a step further: the design could use these sensors to adapt itself to aging/reliability effects (which can vary both temporally and spatially, and not always uniformly), as well as to process variations, enabling continued operation at an optimum or near-optimum point. Such self-correction could also be applied at manufacturing test to increase yield, in the field to increase quality, or both.
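The self-adaptation idea can likewise be sketched as a simple control loop. The Python below is a hypothetical illustration only: the slack model, voltage step, and limits are invented, and a real implementation would run in on-chip or firmware control logic rather than in host-side Python.

```python
# Hypothetical sketch of sensor-driven self-adaptation: a control loop that
# reads an aging/delay monitor and nudges supply voltage to hold a timing
# margin. The monitor model, step size, and limits are invented for
# illustration; they are not from the panel or any real product.

def read_path_slack_ps(vdd_v: float, degradation_ps: float) -> float:
    """Toy model: slack improves with voltage and shrinks as the circuit ages."""
    return 50.0 + 400.0 * (vdd_v - 1.0) - degradation_ps

def adapt_voltage(degradation_ps: float,
                  vdd_v: float = 1.0,
                  target_slack_ps: float = 20.0,
                  vdd_max_v: float = 1.1,
                  step_v: float = 0.01) -> float:
    """Raise VDD in small steps until the monitored slack meets the target."""
    while read_path_slack_ps(vdd_v, degradation_ps) < target_slack_ps and vdd_v < vdd_max_v:
        vdd_v = round(vdd_v + step_v, 3)
    return vdd_v

# After simulated aging removes 60 ps of slack, the loop settles on a higher VDD.
print(adapt_voltage(degradation_ps=60.0))
```

The same loop structure could just as well drive frequency or adaptive body bias instead of voltage; which knob (or mix of knobs) is appropriate is exactly the trade-off the panel debated.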

One downside of this approach, as Gordon Gammie of Texas Instruments pointed out, is the area overhead of these small circuits and their support infrastructure. If the self-tuning circuits are inaccurate or designed improperly (i.e., without a proper understanding of their impact on the design of the "resilient" circuit itself), the financial benefit of sensor-based self-tuning/adaptation can be lost. For example, which controls should be available to the sensor/monitor-driven adaptation: supply voltage, frequency, adaptive body bias, control of islands? There are likely many other open issues, and the wrong balance or mix could cost performance, power, and reliability.

Panelist Phil Nigh of IBM was bullish on the use of these embedded circuits. He felt the idea could be extended further: the embedded circuits could also be used for design characterization, manufacturing debug, and diagnostics — and some fraction of them could even be exposed to the end customer for debug and characterization at the card and system level. Standards, such as IEEE's proposed P1687, would be needed for connecting and controlling these on-chip instruments.

In closing, the panelists called for more research across both academia and industry on this cutting-edge topic and on the best approach for sub-65nm silicon designs. The open questions are clear: which embedded circuits make the most sense (aging, enablement of more aggressive design, characterization, debug, etc.)? Is standardization necessary, and if so, how much? Or are these embedded circuits unnecessary in the first place because there are better approaches to the problem?


LeRoy Winemberg, ITC panel coordinator, is design-for-test manager in Freescale Semiconductor’s microcontroller solutions group. Kenneth M. (Ken) Butler, ITC panel moderator, is the chief design-for-test technologist in Texas Instruments’ analog engineering operations group.
