On the road again: Three shows highlight test, packaging, and APC
12/01/2000
ITC 2000: More test must be designed into chips
As features shrink at an accelerated pace, putting many millions of devices on a chip, and speeds move into the GHz range, testing problems are proliferating. More test structures must be built into chips, agreed test engineers at the recent International Test Conference in Atlantic City, NJ. But how that will be done, and how emerging test problems will be solved, were hotly debated. Some potential solutions did emerge from sessions and panels, as well as in the exhibit hall. While manufacturing cost/function scales down with features, test costs don't. As a result, eventually (perhaps as soon as 2002) it could cost more to test a chip than to manufacture it.
With deep submicron processes, faults appear more in interconnects than in gates, pointed out Robert Aitken of Agilent Technologies in a plenary session. There are more layers with taller structures, with more interlayer than intralayer faults. This changes the test requirements. Cells are shrinking faster than I/O pads, making it harder to get at all the circuitry for testing, and "we don't model behavior of vias very well," he said.
Testing coverage must be extended all over a chip, and for best quality, all kinds of tests will be needed at different speeds. Speed is a serious problem "because we can't rely on testers being orders of magnitude faster than circuits under test," Aitken said. Timing problems can exist inside circuits, which may require built-in feedback loops and a variety of tests, such as low-voltage and parametric measurements. Direct measurement may not be feasible, so some extrapolation may be needed to determine whether circuits are working right.
Stand-alone testers may not be adequate to find all faults; customized design-for-test (DFT) structures must be incorporated into the normal design flow, he suggested. Even missing a couple of faults in chips with millions of devices could be disastrous if they are in the clock, for example. At the 0.1µm generation, there can only be one bad device per billion to get enough die from a wafer, according to VLSI Research president Dan Hutcheson. This will require some combination of ATE and built-in-self-test (BIST) to make it work, which means new testing methodologies will be needed.
As chips move into the very deep submicron range, we are reaching the limits of automatic test equipment (ATE) resources, including number of pins, testing time, and memory, according to Yasuo Sato of Hitachi. Certain common faults in such chips (metal wire bridges, highly resistive open vias, and gate oxide shorts, for example) are difficult to detect with a scan-test approach using only a stuck-at fault model, and even delay tests and IDDQ tests may not find them. Sato laid out strategies for doing DFT using a variety of built-in self-test techniques. One panel session debated whether DFT would eventually eliminate costly ATE systems. While there was some support for this view, many speakers discussed how BIST and redesigned ATE systems, attuned to the BIST approaches used in the design, could work together.
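To make the stuck-at fault model concrete, here is a toy sketch: a two-gate circuit, a small fault list, and a check of which faults a given vector set detects. Real fault simulation and ATPG are far more involved; the circuit, fault list, and vectors here are all hypothetical, and the point is only to show what the model covers (a node permanently forced to 0 or 1) and, by implication, why defects that do not behave that way, such as resistive opens and bridges, escape it.

```python
# Toy stuck-at fault simulation (illustrative only, not a real ATPG flow).

def circuit(a, b, c, stuck=None):
    """y = (a AND b) OR c, with an optional node forced to a stuck value."""
    nodes = {"a": a, "b": b, "c": c}
    if stuck and stuck[0] in nodes:
        nodes[stuck[0]] = stuck[1]       # primary input stuck at 0 or 1
    n1 = nodes["a"] & nodes["b"]         # internal AND-gate output
    if stuck and stuck[0] == "n1":
        n1 = stuck[1]                    # internal node stuck at 0 or 1
    return n1 | nodes["c"]

def detected(fault, vectors):
    """A fault is detected if some vector makes the faulty output differ."""
    return any(circuit(*v) != circuit(*v, stuck=fault) for v in vectors)

vectors = [(1, 1, 0), (0, 1, 0), (1, 0, 0), (0, 0, 1)]
for fault in [("a", 0), ("b", 0), ("n1", 1), ("c", 1)]:
    print(fault, detected(fault, vectors))  # all four faults are detected
```

A resistive open or a bridge between two nets changes timing or couples signals rather than pinning one node to a constant, so no `stuck` assignment models it; that is the gap Sato's delay-test and IDDQ comments address.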
There was general agreement that it is becoming essential to integrate test design into the design flow. Synopsys announced that it is integrating its DFT Compiler into its Physical Compiler environment, and that it is offering next-generation sequential automatic test pattern generation (ATPG) technology in its TetraMAX ATPG tool. Tester companies such as Agilent and Teradyne are offering capabilities to speed up testing by simultaneously testing different cores within a complex system-on-a-chip, but using them requires circuit designers to incorporate circuitry that allows each functional block to be isolated from all the others. Fluence Technology has been working with Infineon engineers to incorporate its jitter-measurement BIST into an advanced 0.18µm CMOS process.
One intriguing concept, suggested by Katz and Rajsuman of Advantest, is to carry the simulation and verification environment of the design cycle into the test phase. A tester designed for this would be event-based rather than cycle-based, as current testers are. It would store changes in signal values as events, just as they are observed in a design simulation. Events observed on each pin would be treated independently instead of being synchronized to cycles based on time-sets. The architecture for such a tester would actually be simpler, eliminating much of the hardware involved in generating timed signals and storing timed waveforms. The authors applied the concept to a MicroSPARC processor on the proposed tester and found they could prepare a test template in half an hour; developing a test program on a conventional logic tester would have taken days.
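The event-based idea can be sketched in a few lines: each pin keeps its own independent list of (time, value) transitions, exactly as a simulator's waveform dump records them, with no cycle clock or time-set in sight. This is a minimal illustration of the concept as described, not Advantest's implementation; all names are invented.

```python
# Minimal sketch of an event-based test pattern store (hypothetical API).
from collections import defaultdict

class EventBasedPattern:
    """Signal-value changes stored per pin, independent of any tester cycle."""

    def __init__(self):
        self.events = defaultdict(list)  # pin -> [(time_ns, value), ...]

    def record(self, pin, time_ns, value):
        """Record a transition as a simulator would observe it."""
        self.events[pin].append((time_ns, value))

    def value_at(self, pin, time_ns):
        """Replay: the pin's value is that of the latest event at or before time_ns."""
        value = None
        for t, v in self.events[pin]:   # events appended in time order
            if t > time_ns:
                break
            value = v
        return value

pattern = EventBasedPattern()
pattern.record("clk", 0, 0)
pattern.record("clk", 5, 1)
pattern.record("data", 3, 1)  # no need to align this edge to a cycle boundary
print(pattern.value_at("clk", 7))   # 1
print(pattern.value_at("data", 2))  # None (no event has occurred yet)
```

A cycle-based tester would instead force every edge onto a time-set grid per cycle; here the "data" edge at 3 ns simply is what the simulation said it was, which is why a simulation testbench can translate into a test template so quickly.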
System-on-a-chip and other complex ICs will require BIST to narrow down the number of pins needed to do complete testing. Yet, chipmakers feel that test structures do not add value to an integrated circuit, so the silicon real estate for test circuits has shrunk in recent years, often to under 1%. Chip companies also do not like to pay the true cost for software to improve testing, according to VLSI's Hutcheson. He cited Cadence's Membist as one of the most viable software solutions, "but the company throws it in free."
Agilent has set up an SOC business unit to focus on this emerging part of the business. The SOC market will reach $8 billion by 2005, according to VLSI Research projections. Motorola has settled on Agilent's 93000 series test system for a wide range of SOC devices, according to Tom Newsom, VP of the new SOC unit.
Virtual testing, using software like that from Integrated Measurement Systems (IMS), is helping test engineers and designers work together. Bob Waller, product and test development engineer for IBM Microelectronics, said "virtual tools allow test people to show designers why a design will not work on a tester with fact-based conversations." The IMS software can emulate any commercial tester to allow such interaction.
Wafer-level testing promises to find faults, weed out bad devices, and even help sort chips by speed capabilities more efficiently than testing individual die. TEL, in a joint project with Motorola and W.L. Gore, showed a wafer-level test system it is developing for known-good-die testing. Working wafer-level test systems into the manufacturing flow will be difficult, according to Pat Kiely, TEL's business unit manager for test systems, because every group, from design through manufacturing, testing, and packaging, must work together to make it practical.
Teradyne featured its J750 ATE system, which tests 32 DRAMs simultaneously on a wafer, and there are plans for simultaneous testing of 64 DRAMs, according to Doug Elder, marketing manager for Teradyne's Integra division.
Getting enough knowledgeable test engineers is also becoming an industry-wide problem. One test manager said that last year, after his company finally allowed his test engineers to work with the design group, the test engineers all left his group to become designers. A Stanford professor stood up at one panel session and said he had surveyed all the EEs from a recent graduating class, and found they all wanted to be either chip designers or dot-com entrepreneurs.
"Not one of them wanted to become a test engineer," he said. R. H.
AEC/APC Symposium: Accelerated roadmaps don't dampen optimism
Bob Helms is looking for a new type of roadmap to predict when the true progress in the semiconductor world will diverge from existing industry roadmaps. Helms, director of silicon technology research and a VP at Texas Instruments, sees his customers already pushing IC requirements ahead of the 1999 ITRS, but his faith in the industry's ability to meet the challenges is undiminished. His mostly serious comments were part of the keynote address at the recent Advanced Equipment Control/Advanced Process Control Symposium, sponsored by SISA (formerly SEMI/Sematech), International Sematech, and the Fraunhofer Institute, at Lake Tahoe, Nevada.
The accelerated customer-driven roadmaps at TI show microprocessor clock frequencies doubling every three years, not four or five, and embedded memory density doubling every 18 months instead of 24. The only way to do this within the required cost targets, according to Helms, is with advanced process control (APC). Most of the significant avenues for cost reduction in parallel with more complicated processes have been used up, and "fab productivity is the only major one left with more room for improvement," said Helms. The productivity challenge is complicated by the current and upcoming revolutions in processing, such as Cu interconnect, a new low-k dielectric material every 18-24 months, new gate insulator materials, and a move to SOI, which he saw becoming mainstream in five years. The required upgrade from fabrication to manufacturing is summarized in the table.
[Table: requirements for the upgrade from fabrication to manufacturing; not reproduced]
A key area of APC is run-to-run control, and presenters from AMD and TI gave some examples. Jerry Stefani of TI's Kilby fab in Dallas described the company's oxide CMP process control approach. The current flow includes a test polish on look-ahead and pilot wafers using a polish time calculated with a design-dependent "sheet film equivalent" (SFE) metric that resides on the system and reflects the topography of a particular device. An in-line metrology system measures the thickness, and the process is revised if necessary for the lot of wafers. The use of a test polish step shows that not all of the goals in Helms' table are being met yet, but Stefani said that the step is still needed because of unique development lots that run through the fab. One intermediate solution will be to begin the polish of the production lot with low-risk wafers while metrology and process correction occur on the pilot wafer. This would allow the tool to run at sprint rate, with only low-risk wafers moving ahead before the correction occurs.
At AMD, run-to-run control of shallow trench isolation (STI) etch is accomplished with an adaptive control scheme that tracks the silicon etch rate. The nitride deposition thickness data is fed forward to determine the correct silicon etch rate, and the etch time for a wafer is based on the outcome of the previous wafer. Since the etch rate is dependent on the reticle used for STI patterning, only lots with the same chamber and reticle are used to calculate the observed etch rate. Anthony Toprac of AMD reported a 40% improvement in Cpk with the implementation of this STI etch run-to-run control scheme.
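The feed-forward/feedback loop described above can be sketched in a few lines. This is only an illustration of the run-to-run idea, using a simple exponentially weighted moving average (EWMA) filter, one common choice for such controllers; AMD's actual algorithm, numbers, and parameter names are not given in the article and everything below is hypothetical.

```python
# Illustrative run-to-run etch controller (hypothetical values and names).

def etch_time(target_depth_nm, rate_estimate_nm_s):
    """Feed-forward: choose the etch time that hits the target depth
    at the currently estimated silicon etch rate."""
    return target_depth_nm / rate_estimate_nm_s

def update_rate(rate_estimate_nm_s, measured_depth_nm, time_s, weight=0.3):
    """Feedback: blend the rate observed on the previous wafer into the
    running estimate (EWMA)."""
    observed_rate = measured_depth_nm / time_s
    return (1 - weight) * rate_estimate_nm_s + weight * observed_rate

rate = 4.0       # nm/s: initial estimate, kept per chamber/reticle pair
target = 320.0   # nm of silicon to remove
for measured in (310.0, 318.0, 321.0):   # depths reported by metrology
    t = etch_time(target, rate)
    rate = update_rate(rate, measured, t)
    print(f"etch time {t:.1f} s, updated rate {rate:.3f} nm/s")
```

Keeping a separate rate estimate per chamber-and-reticle combination, as the article notes AMD does, simply means maintaining one such `rate` state variable per (chamber, reticle) key.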
John Stuber of TI described run-to-run control of CDs by manipulating photolithography exposure settings, but he also raised the more general issue of measuring the value of APC. Traditionally, all products within the specification limits had the same value, but when looking at the fabrication of a wafer step-by-step, a process whose output is just inside one of the control limits is qualitatively not as good as one that is perfectly centered within the control limits. The challenge is to make that concept quantitative.
Stuber proposes using the Taguchi loss function, which assigns less value to something that is in spec but off-target. The function is an inverted parabola with its peak at the center, reaching zero at the control limits. A pure parabolic function might not be universally applicable (what happens when the target is not centered between the control limits, for example?), but this approach could be a useful tool when trying to justify the cost of APC implementation in terms of the product outcome. J.D.
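A minimal sketch of the value function Stuber describes follows: an inverted parabola with full value at the target and zero value at the control limits (the complement of a quadratic Taguchi loss). The shape is from the article; the scaling to the range 0-1 and the handling of an off-center target are assumptions made here for illustration.

```python
# Inverted-parabola process value (illustrative scaling assumptions).

def process_value(x, target, lcl, ucl):
    """Value of a measurement x: 1.0 at target, 0.0 at or beyond the limits.
    For an off-center target, the wider half-width is used so the value
    never goes negative in spec (one possible convention, not the only one)."""
    if not (lcl <= x <= ucl):
        return 0.0
    half_width = max(target - lcl, ucl - target)
    return 1.0 - ((x - target) / half_width) ** 2

print(process_value(100.0, 100.0, 90.0, 110.0))  # 1.0: perfectly centered
print(process_value(109.0, 100.0, 90.0, 110.0))  # ~0.19: barely in spec
print(process_value(112.0, 100.0, 90.0, 110.0))  # 0.0: out of spec
```

Summing such values over a lot, instead of counting pass/fail, is one way to make quantitative the point that a barely-in-spec process step is worth less than a centered one.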
PackCon 2000: Packaging doesn't behave like a wafer fab
Assembly is still a different type of business from the rest of the semiconductor industry, and data presented by keynoter Ron Leckie at PackCon 2000 made it clear that any convergence of wafer fab and assembly is still pretty far in the future.
Leckie, the Saratoga, CA-based CEO of the consulting firm Infrastructure, pointed out that the ITRS99 says the main challenge for packaging in the foreseeable future is cost reduction. "Cost on a technology roadmap?" was Leckie's rhetorical question. Cost issues appear in the wafer fab process areas of the ITRS, but they are part of a much larger equation. Junction depths, linewidths, and other measures of process performance are the prominent drivers for wafer processes, but in the packaging realm, performance and density are secondary to cost. Perhaps the semiconductor industry still sees packaging as a necessary evil to be minimized whenever possible, instead of an area of potential value.
The trends for assembly equipment purchases are also different from those for wafer fab and test equipment, according to Leckie. He evaluated the percentage of chip revenues spent on various kinds of equipment on a monthly basis. For wafer fab equipment, the percentage is typically 10-20%, with recent peaks in mid-1996 at 22%, early 1998 at 20%, and early 2000 at 22%. The recent peak is in line with other highs. The same is true of test equipment, with the recent peak at about 4.7%, lower than the 5.5% seen in early 1998.
Assembly equipment purchasing veers sharply away from this, though. For all of 1996-99, the assembly equipment buy ratio was 1.0-1.8%, and it was more stable than the wafer fab equipment market. The first six months of 2000, however, were all between 2.0 and 2.7%, marking a large and extended surge in spending on assembly equipment. Leckie noted that one of the first signals of upcoming shortfalls in revenue and profit came from Kulicke & Soffa, perhaps because of the new instability in the purchasing trends. He didn't expect a collapse in 2001, though, and termed the upcoming slowdown a "digestion recovery." The likely cause of the buying overshoot was "lead-time stretch," in which buyers make more speculative purchases because of the long lead times for many types of equipment these days, Leckie said.
On the technology side, some of the expected topics were prominent in the papers and exhibitions at PackCon. Several approaches to three-dimensional packaging, for example, were on display. Tessera, a chip-scale packaging IP provider in San Jose, showed a stacked CSP created by attaching two ICs side-by-side with TAB interconnection, and then folding one over onto the other. Valtronic SA of Switzerland had a paper on a similar 3D approach, in which multiple ICs are attached with a flip chip process to a thin, flexible, multilayer PCB. The PCB is then folded so that the ICs are in a vertical stack. This has been qualified for several applications, and the authors cite medical uses with very difficult dimensional constraints, such as hearing aids.
In a related area, SEZ America presented impressive data on wafer thinning. SEZ's wet chemical spin-processing approach was used to remove defects left on the backside of wafers that were thinned with standard backgrinding. Scott Drews, a SEZ field applications engineer, reported on work with STMicroelectronics that improved the strength of silicon wafers by 25x by removing defects at the origin of cracks that propagate and break wafers during handling. The process has also been applied to GaAs, InP, SiGe, and other materials, with straightforward changes in the process chemistry.
A new set of software products introduced at PackCon by APT Interactive aims to address issues that have plagued the packaging world for years. The Bangalore, India, company has created what it calls "Knowledge Management" software, which, in simple terms, is a database of packaging-related process information. Engineers can enter information about a challenge that arose, and how they addressed it, into a highly structured database. The information is then available for reference when a similar challenge arises at another time or place. This seems like the kind of system every company should already have in place, but the problem of information not propagating throughout an organization's assembly facilities historically has been widespread. J.D.