What's driving test?

11/01/2003

Solid State Technology asked industry executives to comment on what's driving the test industry and where it needs to go.

Cost-per-die pressures driving structural test

Risto Puhakka, VP, VLSI Research Inc., Santa Clara, California

The test sector will undergo significant changes over the next few years, driven by the increasing complexity of new system-on-chip (SoC) designs and technologies, new materials, and design shrinks. The industry's relentless march at a two-year/node pace puts even more pressure on cost-of-test (COT) and imposes tight deadlines for finding new ways to test chips. Test does not scale with Moore's Law; in fact, the problems grow as more transistors are put on a chip. At the same time, the use of new materials means new methods must be developed to catch new failure modes, and all of this must happen in an environment where test cost/transistor must go down or the "show" comes to an end.



Because so much is integrated onto one chip, many of the challenges will arise from SoCs and other highly integrated devices. Each embedded technology (e.g., memory, logic, and analog cores) has its own test issues, and the cost of testing the entire chip can be set by the most expensive link in the test chain.

Memory chips, whether volatile or nonvolatile, are normally tested in parallel, 64 devices at a time, compared to ASICs, which are tested only 1–3 devices at a time. This creates a significant imbalance in test economics and makes testing the memory in SoC devices more expensive than it should be. SoCs also have analog components, which require direct-access, high-precision (i.e., expensive) ATE, vs. logic, which needs only low-cost digital ATE. All of these technologies are put together in a single chip, so testing in one pass could require an incredibly expensive tester. The alternative is distributed test, which can also be expensive and can result in incomplete tests.
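
A rough back-of-the-envelope calculation illustrates how much parallelism drives cost per device; the hourly tester rates and test times below are assumed purely for illustration and are not figures from this article:

    # Illustrative cost-of-test arithmetic; all numbers are assumptions for the example.
    def cost_per_device(tester_cost_per_hour, test_time_s, devices_in_parallel):
        """Tester time is amortized across every device tested simultaneously."""
        return (tester_cost_per_hour / 3600.0) * test_time_s / devices_in_parallel

    # Memory-style test: x64 parallelism spreads the tester's hourly cost widely.
    memory = cost_per_device(tester_cost_per_hour=200, test_time_s=30, devices_in_parallel=64)
    # SoC/ASIC-style test: only 1-3 devices at a time bear the full tester cost.
    soc = cost_per_device(tester_cost_per_hour=400, test_time_s=30, devices_in_parallel=2)
    print(f"memory: ${memory:.2f}/device   SoC: ${soc:.2f}/device")   # ~$0.03 vs. ~$1.67

Even with identical test times, the per-device cost differs by well over an order of magnitude, which is the imbalance described above.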

Another challenge affecting the test sector is that defect types and failure modes are changing as new materials such as copper, low-k, and high-k are introduced to the fab process. Some fabless suppliers have already learned this the hard way in copper implementations at 130nm. The biggest issue with copper is resistive opens, but the industry has been structured around testing for shorts, the most common problem with aluminum. Add to this the fact that the latest chips have close to a billion vias, and the number of vias will continue to go up over time. This is an enormous quality challenge to overcome.

These problems are very complex, and in many cases the industry is still learning the physics of how they occur and how to test for them. These are not the old CMP line-cracking problems; they are complex resistive-contact issues at the bottom of vias. Low-k and high-k dielectrics have yet to enter high-volume manufacturing, and little is known about their quality and reliability problems. After the problems with copper, one should expect that they will soon present new walls for test engineers to scale.

Design shrinks are also causing gates to become thinner, which results in more tunneling and leakage current; even statistical fluctuations in the number of atoms implanted in the junctions are starting to drive test. Additionally, in the drive to lower test cost, the chip is stressed more heavily under test: a much higher proportion of transistors must switch in unison than would ever be required in the end application, so power management during test is becoming more problematic.

All of the issues listed are still solvable by implementing new test strategies, but those strategies tend to drive up COT. The cost problem is compounded by the transition to 300mm manufacturing: larger wafers mean more die per wafer, so fabs get the benefit of the change, but test does not, because COT stays roughly constant per die regardless of wafer size. Test is roughly 15% of a typical 200mm wafer's manufacturing cost and will rise to about 25% at 300mm, even with improvements in test technology, assuming the fab cost/good die drops by 30%.
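
A simple normalization shows the mechanism; the 85/15 split and the 30% reduction below follow the percentages quoted above, but the model itself is only an illustration, not the article's detailed cost analysis:

    # Illustrative only: normalize cost per good die at 200mm to 100 units.
    fab_200, test_200 = 85.0, 15.0           # fab vs. test share of cost at 200mm
    fab_300 = fab_200 * (1.0 - 0.30)         # fab cost per good die drops ~30% at 300mm
    test_300 = test_200                      # test cost per die stays roughly constant
    share_200 = test_200 / (fab_200 + test_200)
    share_300 = test_300 / (fab_300 + test_300)
    print(f"test share of cost/good die: {share_200:.0%} at 200mm -> {share_300:.0%} at 300mm")
    # Under these simplified assumptions the share rises to roughly 20%; the 25% figure
    # cited above reflects a more detailed cost model than this sketch.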

One solution to these problems is structural test, and the industry is beginning to agree. Structural test is designed to check whether the device was manufactured as designed, but what it really does best is provide a low-cost solution for testing hard failures, which take up the bulk of the time in testing. More costly resources can be focused solely on finding soft failures. Structural test includes a number of well-known tools that fit into the design-for-test (DFT) tool kit. These include logic BIST, memory BIST, analog core DFT, and boundary scan. Although functional testing is still needed for speed sorting, verification, and finding soft failures, the rise of structural testing is a massive change in methodology.
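
As a concrete illustration of the structural-test style, the sketch below models a generic March C- pass of the kind a memory BIST engine runs over an embedded array; it is a software rendering of a textbook algorithm, not a description of any particular vendor's BIST hardware:

    # Software model of a March C- memory test, the kind of pattern a memory BIST
    # controller generates on-chip. Illustrative only; real BIST runs in hardware.
    def march_c_minus(mem, size):
        """Run the six March C- elements over a memory modeled as a list; return failing addresses."""
        faults = set()
        def read_expect(addr, expected):
            if mem[addr] != expected:
                faults.add(addr)
        for a in range(size):             # M0: ascending, write 0
            mem[a] = 0
        for a in range(size):             # M1: ascending, read 0 then write 1
            read_expect(a, 0); mem[a] = 1
        for a in range(size):             # M2: ascending, read 1 then write 0
            read_expect(a, 1); mem[a] = 0
        for a in reversed(range(size)):   # M3: descending, read 0 then write 1
            read_expect(a, 0); mem[a] = 1
        for a in reversed(range(size)):   # M4: descending, read 1 then write 0
            read_expect(a, 1); mem[a] = 0
        for a in range(size):             # M5: read 0 from every cell
            read_expect(a, 0)
        return sorted(faults)

    print(march_c_minus([0] * 256, 256))   # a fault-free memory returns []

A pattern like this detects hard failures such as stuck-at and many coupling faults cheaply and deterministically, which is exactly the role structural test plays in the strategy described above.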

Problems associated with new materials and design shrinks can be solved, but conventional solutions tend to drive up COT. At the same time, the transition to 300mm wafer processing is changing chipmakers' cost model, exerting pressure on the test sector to match reductions being achieved in the cost/die. The combination of these issues is forcing the test sector to change in ways never seen before.

For more information, contact Risto Puhakka, VLSI Research Inc., 2880 Lakeside Drive #350, Santa Clara, CA 95054; e-mail [email protected].


Solving SoC test challenges

Sergio Perez, vice chairman of the Semiconductor Test Consortium and VP of sales, Advantest America Inc., Santa Clara, California

In the quest for continuous cost improvements, the test sector has not always benefited from Moore's Law scaling. While expanding the parallelism of test solutions has helped lower the cost of memory test, parallelism has not been a viable option for testing system-on-chip (SoC) devices because of their higher pin counts and the diversity of functions inherent in these highly complex devices.



Instead, companies must use specialized testers for SoCs or general-purpose testers that can be configured per device class. The problem is complicated further because new testers are required after one or two device generations, typically every 3–6 years. This is a huge challenge for test-engineering and manufacturing managers who need to control escalating costs while getting product to market quickly.

With dozens of different test platforms currently used throughout the industry, IDM, foundry, and test-house customers are held captive by the need to constantly upgrade their automated test equipment (ATE) or risk testing advanced devices with trailing-edge test systems.

Having to continually invest in new-generation test solutions is a costly proposition, and the situation has created several challenges for the semiconductor industry. First, relentlessly evolving device-testing requirements demand new features that can render classical ATE systems obsolete in as little as one chip design cycle. Furthermore, traditional ATE system development, which typically takes 15–18 months, does not have the response time or bandwidth to deploy new test solutions fast enough to keep up with those requirements; this is especially true today, when some customers need a viable test solution in 90–180 days to meet their own customers' deadlines. Lastly, the classical ATE environment imposes an architecture-specific learning curve on both the ATE system developer and the device test engineer, which is unacceptable in an era of limited and expensive engineering resources.

The ATE industry must develop a truly open and flexible test platform while striking a careful balance among the critical objectives for end users: cost, size, and performance. A common and flexible platform with an expected life of 10+ years will enable the outsourcing model, simplify the technical-resource development issue, and, most important, reduce time to market, especially for SoCs.

To make fundamental improvements in cost, competitive compatibility, platform support, and performance scaling, component ATE must move from a proprietary platform architecture to an interchangeable modular platform that scales and encourages multiple-supplier participation. With a modular architecture, scalability will improve as upgrades and specialty solutions emerge.

A key attribute of test modules is their ability to act as independent tester sites. The flexibility of an open-architecture platform enables it to be configured in numerous ways without resorting to extensive re-fit or customization. Modules can be configured to act independently, together with other modules as multiple-encapsulated test sites, or as a single test site with larger channel count. Over time, improvements in device performance and integration would enable new test modules to drive lower cost and better capability.
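
As a rough sketch of what those configurations might look like in software, the example below models modules being grouped into test sites; the class and field names are hypothetical and do not describe any real platform's API:

    # Hypothetical sketch of module-to-site configuration in an open, modular ATE platform.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TestModule:
        name: str            # e.g., "digital-250MHz", "precision-analog" (illustrative labels)
        channels: int        # tester channels this module contributes

    @dataclass
    class TestSite:
        modules: List[TestModule] = field(default_factory=list)

        @property
        def channel_count(self) -> int:
            return sum(m.channels for m in self.modules)

    # One module acting as an independent test site:
    single = TestSite([TestModule("digital-250MHz", 64)])

    # Several modules pooled into a single, higher-channel-count site for a large SoC:
    big_soc = TestSite([TestModule("digital-250MHz", 64),
                        TestModule("digital-250MHz", 64),
                        TestModule("precision-analog", 16)])
    print(single.channel_count, big_soc.channel_count)   # 64 and 144 channels

The point of the sketch is simply that the same modules can be presented as independent sites or pooled into one larger site without re-fitting the hardware.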

Another economic argument for a single-platform model is that outsourcing is becoming more crucial to the viability of the industry. A single-platform model will enable a truly global industry, in which borderless teams work with common platforms. It is also much more cost-effective than maintaining dozens of different platforms to accommodate specific end-use applications, such as data communications, telecommunications, voice and image transmission, and home entertainment.

Looking ahead, users will benefit greatly from investing in one architecture and purchasing custom, plug-and-play interchangeable modules to meet their specific and ever-changing requirements. The migration to a truly open architecture will take time, but it is vital for the future of this industry.

For more information, contact Sergio Perez, Advantest America Inc., 3201 Scott Blvd., Santa Clara, CA 95054; e-mail [email protected].