Fifty years on, IC industry’s greatest invention is itself

11/01/2007

EXECUTIVE OVERVIEW

Founding a semiconductor magazine, or company, in 1957 required a leap of faith. Yet for more than five decades, the semiconductor industry has exceeded all supposed limits to its growth and innovative potential. In the process, it has continually reinvented itself, from delicate hand assembly of the first ICs to today’s highly automated manufacturing.

Starting a new business has never been easy. In 1957, when Sam Marshall launched this magazine, then known as Semiconductor Products (Fig. 1), no one yet knew what the semiconductor industry would become. Looking back, there is a certain air of inevitability to it all: in 2007, we know the answers to the seemingly insoluble technical problems and have progressed beyond many of the wrong turns and blind alleys. But from the perspective of 1957, the only clue to the future was that the transistor, developed just a decade before, had already created a cluster of suppliers and a handful of important applications. Companies such as Texas Instruments, one of the first advertisers in the new publication, were growing steadily. Designers were beginning to choose transistors over vacuum tubes.


Figure 1. The first magazine cover.

Still, transistor product yields languished in the low single digits, putting the cost of the new components well above that of comparable vacuum tubes. Big changes were on the horizon, however. IBM’s model 608 calculator, the first solid-state computing product, was first delivered in December 1957 [1]. It’s likely that Marshall, like most entrepreneurs, faced the first few months of his new venture with hope, enthusiasm, and no small amount of trepidation.

Those emotions were common throughout the early semiconductor industry. In interviews, most early company founders admit they hoped to build a successful business, but never expected to be part of an industry that would change the world. As ESI CEO Nick Konidaris put it, the IC industry has leapfrogged every reasonable expectation, reshaping the very nature of work. Looking back, though, the story of the industry is one of relentless attention to seemingly insignificant details rather than world-changing revolutions.

The 1960s: Defining the process

The first decade of Semiconductor Products (which became Solid State Technology in 1968; Fig. 2) saw the development of essential methods for predictable manufacturing. It’s easy to forget just how crude those early facilities were. In 2007, fabs expect yields in excess of 90% from devices with critical dimensions smaller than 100nm. In 1967, the first MOS memory stored 64 bits, and yields as high as 10% would have caused joyous celebrations. High-throughput manufacturing, speed binning, and optimized asset utilization all came later. In the early days, it was hard enough to simply make devices that worked.

One of the most important fundamental technologies was the purification and growth of silicon itself. The 1960s saw silicon displace germanium as the semiconductor material of choice. And rising demand led to a corresponding proliferation of investments in silicon technologies. By the end of the decade, the commercial silicon supply was mature enough for new companies such as Intel to turn to merchant suppliers rather than growing their own crystals as the very first semiconductor device manufacturers had done. This shift from carefully guarded internal secrets to commercially available products became a recurring theme in the coming decades.

As integrated circuits (ICs) became commercially important, stability and reliability were critical. MOS transistors, first proposed in 1925, relied on silicon’s unusually stable native oxide rather than the finicky interfaces of junction transistors. As majority carrier devices, they were less vulnerable to thermal runaway. They even raised the tantalizing possibility of power savings through complementary logic, in which the pull-up and pull-down transistors are never on at the same time, so current flows only while a gate is switching. Unfortunately, no one knew how to maintain gate oxide integrity. Metal ions trapped charge in the oxide, caused shorts, and generally made everyone’s life miserable.

The RCA clean, introduced in 1965, was part of the solution. It was an obnoxious two-step sequence of hydrogen peroxide solutions, mixed first with ammonium hydroxide and then with hydrochloric acid, that both removed ionic contaminants and encouraged engineers to protect their particle-generating bodies with gloves, safety goggles, and the ancestors of today’s cleanroom suits.

By this time, several different laboratories were working on ICs, and another recurring industry theme began to emerge: with many engineers working on the industry’s problems, even the seemingly intractable ones, solutions would only be a matter of time. As the story goes, RCA had learned that hydrogen annealing stabilized the silicon/oxide interface, while Fairchild had figured out how to prevent sodium contamination. Within weeks of the first IEEE Silicon Interface Specialist Conference (Las Vegas, 1965), both companies had produced working circuits.


Figure 2. 1968 issue of the newly-named Solid State Technology.

The 1960s also saw the beginning of the rise of Silicon Valley as the center of the IC world. Though Bell Labs was in New Jersey and IBM was in New York, many of today’s industry leaders trace their roots to the “traitorous eight” who left Shockley Semiconductor Laboratory in Mountain View in 1957 to found Fairchild Semiconductor and who later went on to found such companies as Intel. As Silicon Valley grew, so did the advantages of being in Silicon Valley, where you might solve a problem in a casual discussion with your neighbor, or find funding through a chance meeting at lunch or after work at the Wagon Wheel.

Though the East Coast, particularly the Route 128 corridor around Boston, might have had more engineers, Silicon Valley concentrated the expertise in a very small geographic area. “In Silicon Valley,” Ultratech CEO Art Zafiropoulo said, “you could change jobs without changing car pools. Back East, when you changed jobs you had to move your family.”

The competition was fierce, but the pie was getting bigger all the time, and everyone knew everyone else. For quite some time, there was a shared sense of how far it was permissible to go. One early Applied Materials employee told me, on condition of anonymity, about climbing the fence to an adjacent facility late one night. They had run out of nitrogen for purging a vacuum chamber, and “borrowing” some from the neighbor seemed like the best solution. The same person was so overcome with guilt that they went back in the front door the next day to warn the other company that the nitrogen supply line might be contaminated.

After high quality silicon and good surface preparation, a third requirement for large scale manufacturing of ICs was the ability to handle large quantities of the extremely hazardous gases and chemicals involved. Silane, for instance, is pyrophoric, prone to spontaneous ignition in air. Vapor deposition of silane-based films literally could have explosive results. As Applied Materials founder Michael McNeilly explained, in order to sell process equipment, a vendor had to demonstrate that it knew how to handle the materials. Blowing up a customer’s fab tended to discourage repeat business. Thus, the first product introduced by a small equipment company named Applied Materials was a silane gas panel.

As the 1960s turned into the 1970s, products like the FET memories IBM committed to in 1968 for its System/370 computers and the Intel 4004 microprocessor, introduced in 1971, began to draw a line between electronic systems and the ICs inside them. You didn’t need to know anything about process technology, and not much about transistor behavior, to design a system around these pre-packaged components. Once again, expertise that was previously kept in house was becoming available in the open market.

Even these early ICs made modular design possible. The IC, with its clearly delineated package of predefined functions, allowed system designers to move up to a higher level of abstraction and to tackle more complex structures. At the same time, even in the early days, the steady advance of Moore’s Law meant that increasing functionality was available at the same or reduced costs. Early adopters of transistors, who had guessed that higher volumes would reduce solid state component costs, saw their gambles begin to pay off. Then, as now, system designers could give their customers a reason to buy the newest product every few years.

Once capable, reliable ICs were available at attractive prices, demand for them started to climb. Having laid the foundations of IC manufacturing in the 1960s, companies now began to focus on process improvement and “the shrink.” It wasn’t enough to leave process control in the hands of a few experts; it had to scale to many thousands of wafers.


Alive and cleaning

In 1975, I was meeting with a group of investors and was informed they weren’t interested in investing in our technology: wet cleaning was dead and the industry was going all dry. I reminded them that the major IC makers were building new facilities with infrastructure to support vast amounts of DI water and acids, yet was told point-blank that wet is dead. So now it’s 30+ years later and nearly 100% of cleaning and stripping is wet. Still looks pretty alive to me!

Joel Elftmann, co-founder and retired chairman of FSI International, established in 1973


The 1970s: Ramping production

Fortunately, 1971 brought the first SEMICON show, and Applied Materials went public in 1972. It was no longer necessary to assemble equipment from parts according to carefully guarded designs. While process recipes remained closely held secrets, basics like the composition of a cleaning bath or the design of a PVD chamber were now out in the open. Just as silicon became a commercial product once sufficient expertise existed outside the IC manufacturers, by the 1970s, equipment design was mature enough to allow the development of independent tool suppliers.

All of this is not to say that life got any easier for the IC manufacturers. As KLA-Tencor CEO Rick Wallace explained, the toughest challenges in IC manufacturing always lie just beyond the reach of current technology. When a new piece of equipment or a new capability arrives in the fab, it may solve problems and simplify manufacturing for the short term. But soon enough designers begin to use the new capabilities to improve performance. Before long the new equipment will also be operating at the outer limits of its capabilities.

The 1970s provided some of the first indications that ICs might have a significant impact on society as a whole. Back in 1968, IBM’s bet on FET memories was a huge risk, but by 1977 the introduction of the Apple II, the Commodore PET, and the Tandy TRS-80 computers showed that an important movement was taking place (Figs. 3a-c). Although by today’s standards these early personal computers were little more than toys, they were radical in two important ways. First, they sat in an ordinary home or office environment, rather than being locked away in a climate-controlled data center. Second, they came fully assembled, ready to run right out of the box. They couldn’t do much yet, but the days of assembling a computer from parts and programming it by toggling panel switches were over.


Figures 3a-c. The Apple II, Commodore PET, and Tandy TRS-80.
(Courtesy: http://oldcomputers.net)

This sense of potential was not universally shared, however. As Ken Levy, Chairman Emeritus of KLA-Tencor, observed in a published interview, “at that time [mid-70s] people believed the semiconductor industry had a death wish because it kept lowering its prices every year. It sold semiconductors for less and less money and how could anybody make money in a business where you lowered your prices and you delivered more value every year. There was a great debate on that. Even worse than the semiconductor company was a semiconductor capital equipment company because it was known it was a very cyclical business and if your customers weren’t making any money, how could you make any money?”

Computing began to become a tool for the masses, and as it did, the electronics and IC industries diverged. The integrated circuit industry focused on improved processes and more efficient manufacturing, while the electronics industry emphasized system integration and software. Chips themselves became, to some extent, commodities. Intel might have built the first microprocessor, but soon enough Motorola and others had them, too. Some components, like memories, became true commodities, with standard interfaces that made chips from different suppliers interchangeable. IC makers began to face competition both for initial design wins and for longer term manufacturing contracts. The rest of the world, notably Japan, began battering at the gates of what had been a predominantly American business.

The 1980s: Globalization and the birth of the modern IC industry

By the mid-1980s, several trends that would define the semiconductor industry as we know it today had begun to emerge. The first was globalization, exemplified by Intel’s exit from the memory business in 1985. Suddenly, the industry was bigger than a group of colleagues swapping stories at a Silicon Valley pub. It was an engine of national pride and national competitiveness.

Memory chips, with their array-like, repetitive designs, were at the leading edge of globalization because they reduced the importance of programming and logic design capabilities and maximized the impact of manufacturing proficiency. Companies with relatively little design expertise could still compete in the memory market. Increasing competition also inspired the beginnings of today’s emphasis on yield, process control, and asset utilization. With commodity designs, cost is the only differentiator, and the only way to reduce manufacturing cost is to improve these metrics.

A second major change in the 1980s was the rise of single-wafer processing, exemplified by the Applied Materials Precision 5000 platform, introduced in 1987 (Fig. 4). Single-wafer processing represents the triumph of process control over raw throughput: all other things being equal, a single wafer system’s smaller process chamber should improve uniformity, though usually at the expense of throughput. One way to increase throughput in these systems is to increase the plasma intensity and/or the process temperature, reducing the process time to make up for the additional wafer handling overhead.
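The throughput side of that tradeoff is simple arithmetic. Treating a chamber as processing one wafer at a time with a fixed handling overhead (the numbers here are illustrative assumptions, not figures from any tool specification):

\text{throughput} \approx \frac{3600~\text{s/hr}}{t_{\text{process}} + t_{\text{overhead}}}~\text{wafers per hour}

A 90-second process step plus 30 seconds of wafer handling yields roughly 30 wafers per hour; cutting the process time to 45 seconds with a denser plasma or a higher temperature raises that to about 48, which is exactly the temptation described next.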

When companies like Applied Materials started to do that, they discovered that arcing and plasma control put substantial obstacles in the way. In DC sputtering, for instance, any defect in the sputtering target can cause an arc, damaging the circuit. Advanced Energy’s Randy Heckman said that the first systems used frequency controllers and power supplies borrowed from radio and telecommunications. These systems were designed to run for long periods of time, modulating power to form the signal. Wafer processing, and particularly single-wafer processing, didn’t need modulation, but did need to be able to turn on and off quickly for arc control and process control. Special-purpose power supplies could be optimized for these very different operating conditions. Supplies designed for the IC industry could also be much smaller, without large capacitors for energy storage.


Figure 4. Applied Materials’ Precision 5000 single-wafer CVD system, introduced in 1987.
(Courtesy: Applied Materials Inc.)

Unsuccessful ideas in the semiconductor industry sometimes don’t die; they just come back a few generations later. For example, germanium, used for the first transistors, was rejected in the 1960s because it proved too difficult to create stable interfaces, particularly for the all-important MOS gate oxide. Forty years later, SiGe wells are used in strain engineering, and the compound appears in some proposed double-channel transistor designs. Germanium is manufacturable now, in part because of the dramatic improvements in process technology that have taken place.

Two other trends that define the modern IC industry got their starts in the 1980s as well. 1986 brought the IBM PC Convertible, the first commercial-volume, IBM-compatible portable computer. Unlike previous portables, which were best described as luggable and were often intended for military use, the IBM PC Convertible was a true portable. A decade earlier, the first personal computers moved computing out of the data center. Now, the first portables took it on the road.

The final momentous development was the rise of pure-play foundries, with the founding of TSMC in 1987. TSMC made the then-radical claim that design and manufacturing could be as separate from each other as, say, IC manufacturing and silicon refining. TSMC, like the rise of non-US-based chipmakers, demonstrated that process technology was readily available. You could succeed in the IC business by being very good at manufacturing or very good at design, but it was no longer necessary for both kinds of expertise to reside under a single corporate umbrella.

Through all of these changes, the basic circuit technology had remained pretty much unchanged since the first MOS-based ICs. There were more transistors on every wafer, thanks to the sometimes heroic efforts of lithographers, and they were tied together by more and more layers of interconnect wiring. Still, the circuit structures would have been familiar to any engineer at that first conference back in 1965. In the 1990s, that was about to change.

SEMATECH, founded in 1987, played an invaluable role in helping the IC industry maintain its record of continuous improvement. Its research focused on pre-competitive technology, allowing IC manufacturers to pool resources to fund the basic research that new materials require. Technology roadmaps, developed first under SEMATECH’s auspices and since then by a series of national and international industry associations, summarized just what advances were needed and outlined some of the possible solutions.

The 1990s: PDAs, the Internet, and changing processes

On the market side, the Palm Pilot, launched in 1996, put real computing power in the palm of your hand (Fig. 5). Though the handwriting recognition scheme has since been largely displaced by miniature keyboards, at the time its speed and simplicity were a revelation. Desktop computers were ubiquitous by then, with enough power for serious technical and financial tasks, but calendars and address books still lived on paper for portability. With the Palm Pilot, electronically stored information could be as accessible as a pocket or purse.


Figure 5. The Palm Pilot 1000.
(Courtesy: Ryan Kairer, PalmInfocenter.com)

By 1997, as Solid State Technology celebrated its 40th anniversary, the magazine’s pages were full of articles about the impending transition to 300mm wafers. The first 300mm-capable process tools were beginning to emerge. Fabs announced that they planned to use the larger wafers at the 0.25µm and 0.18µm technology nodes. Writing in the July issue, VLSI Research analyst Mark Stromberg listed 16 announced 300mm facilities, 12 of them production fabs (see table).

Ten years later, we can see that it didn’t quite turn out that way. The wafer-size transition didn’t really take off until after the industry climbed back from the collapse of the Internet bubble. But in 1997, the industry was still recovering from the DRAM collapse of 1996; the Internet bubble was only starting to inflate; and the peak was still three years away.


Failure to follow predictions characterized these years in other areas, too. Change rarely came as fast as the technology optimists predicted, though a longer view reminds us how much change really did come.

Copper interconnects were just over the horizon in 1997. IBM announced its first copper-based production process in December, at that year’s IEEE International Electron Devices Meeting. Copper-oriented process tools began to proliferate the following year (Fig. 6). 1997 was a year of rumors. Who would build the first 300mm production fab, and when? When would copper production begin, and how severe would copper contamination issues be?


Figure 6. An IBM 200mm copper wafer.
(Courtesy: IBM Corporate Archives)

Looking back, both these developments have been just as revolutionary as anticipated. 300mm fabs, with their large capital requirements and massive production capacity, have helped drive the continued success of the foundry model. Few products, and few companies, create enough volume to justify a 300mm fab by themselves. Though copper metallization was indeed a yield disaster at first (some reports placed early production yields below 40%), it turned interconnect fabrication upside down. Copper plating and CMP replaced aluminum PVD and etch, while dielectric etch became the key to interconnect pattern definition. Now, copper has even matured to the point where manufacturers are willing to consider using it for through-silicon vias, dropping copper nails through the sensitive transistor layer. CMP, a new technology in 1997, is now an established tool for structure definition, from interconnects to DRAM transistors.

Other technologies that were looming in 1997 haven’t had nearly the same impact, at least not in the way forecasters predicted. In 1997, as the industry was just beginning to adopt 248nm lithography for the 0.25µm technology node, most expected 193nm lithography to be a single-generation solution, filling the gap at the 0.18µm technology node before the introduction of 157nm or extreme ultraviolet lithography (EUV). The lens fabrication challenges that eventually killed interest in 157nm lithography were still in the future, but 193nm was already seen as unlikely to meet the needs of the 0.10µm technology node. The 0.25µm node was the first to depend on subwavelength device features, and few imagined that resolution enhancements would become so effective and ubiquitous.

Immersion lithography wasn’t on the list of alternatives in 1997, and lithographers worried that EUV wouldn’t be ready in time. Now, ten years later, EUV still isn’t ready, and many experts wonder if it will ever be cost effective. Immersion lithography has arrived instead, and the 193nm wavelength now prints 65nm features, barely one-third of the exposure wavelength. If there has been a lithography revolution in the last ten years, it has been in the industry’s understanding of and dependence on resolution enhancement techniques. Optical proximity correction, the bane of the 0.25µm lithographer’s existence, has become far more complex and has been joined by a wide variety of phase shift techniques. Realizing 65nm designs in photomasks that will actually print has become a major challenge for software suppliers, maskmakers, and the developers of mask-inspection tools.
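A rough calculation shows how far into subwavelength territory the industry has moved. The numerical aperture below is an assumed value typical of early water-immersion scanners, not a figure cited in this article; the Rayleigh criterion relates the printable half-pitch to the exposure wavelength and the numerical aperture:

CD = k_1 \frac{\lambda}{NA}

With \lambda = 193nm, NA \approx 1.2, and CD = 65nm, the process factor works out to k_1 \approx (65 \times 1.2)/193 \approx 0.40, far below the comfortable 0.6-0.8 of earlier generations and approaching the single-exposure limit of 0.25. Closing that gap is precisely the job of OPC, phase shifting, and the other resolution enhancement techniques described above.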

Dielectrics haven’t evolved as anticipated for either interconnects or the gate stack. In 1997, the National Technology Roadmap for Semiconductors (forerunner of today’s International Technology Roadmap for Semiconductors, ITRS) called for interconnect dielectric constants to drop below 2.0 by now. Researchers focused their efforts on polymers, and were especially worried about compatibility with high-temperature PVD aluminum deposition processes. The introduction of copper eased thermal stability concerns, while the extensive use of CMP pushed water absorption and mechanical stability to the forefront. Carbon-doped SiO2 now dominates the low-k dielectric market, and dielectric constants for the 65nm technology generation hover around 2.5.

One of the side effects of this relatively slow progress has been tremendous pressure on designers to work with what they have. Without a very low interconnect dielectric constant, it becomes more difficult to reduce power consumption, even as circuits drive power density to new heights. Intel engineer Krishna Seshan’s prediction (“Challenges in the deep-submicron,” SST, January 1997) that “reduced power consumption will become the main goal of both the circuit designers and the software engineers controlling the microprocessors” has proven uncannily accurate. Consumer demand for powerful portable computers has driven the emergence of specialized low-power processes with different requirements and more relaxed dimensions than the technology-driven, high-performance category.
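The connection Seshan drew between interconnect materials and power can be sketched with the standard expression for switching power; the numbers below are illustrative assumptions, not values from this article or the roadmap:

P_{dynamic} \approx \alpha \, C \, V_{dd}^{2} \, f

where \alpha is the switching activity, C the switched capacitance (much of it interconnect), V_{dd} the supply voltage, and f the clock frequency. Wire capacitance scales roughly linearly with the dielectric constant, so with everything else held fixed, shipping films at k \approx 2.5 instead of the hoped-for k < 2.0 leaves the wiring contribution to C, and therefore to switching power, at least 25% higher than designers had been promised.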

Though designers have risen to the occasion, the cost to the industry has been tremendous. Ultratech’s Zafiropoulo estimates that a timely, robust low-k dielectric could have cut the interconnect stack in half, saving billions in manufacturing costs.

Finding a replacement for the SiO2 gate dielectric has proven even more challenging. Though hafnium-based gate dielectrics for the 45nm technology generation were announced earlier this year, the 1997 roadmap expected them to appear by the 100nm technology node. Inability to scale the gate dielectric has forced logic manufacturers to slow their channel-length scaling. Memory and flash manufacturers have regained their status as the most aggressive pursuers of device density. 2007 thus represents a return to the status quo: logic manufacturers briefly took the lead in critical dimension scaling in the late 1990s, but have since relinquished it again.

Into the new millennium

As we look forward to the next few decades, consumer gadgets have become one of the largest and most dynamic segments of the electronics industry. These devices have a seemingly unlimited appetite for some kinds of chips, particularly memory. Yet they also depend on a dizzying array of custom circuits implementing each manufacturer’s unique features. According to SEMI president and CEO Stan Myers, consumer devices have made IC demand far less predictable, shifting with the fickle breezes of consumer fads rather than following the steady currents of business capital purchasing cycles. Low-product-mix fabs, high-volume fabs, and high-product-mix fabs have increasingly divergent requirements, with differing tradeoffs between throughput and flexibility.

As in previous decades, ICs also enable the tools that manufacturers use to manage fab systems. The 1980s and 1990s brought manufacturing execution systems, software design tools, and software for process monitoring and lithography optimization. Now, according to Aquest president and CEO Mihir Parikh, software models are beginning to capture the full nondeterministic complexity of process flows in a fab, including changes due to equipment downtime and shifting lot priorities. The semiconductor industry has lagged behind other industries in implementing fully automated manufacturing, largely because while the process flow through an automobile factory or a chemical plant is strictly linear, wafer processing is not. A single lot of wafers might visit the same equipment cell dozens of times, for successive lithography, deposition, or etch operations. Fab simulation is only now beginning to model realistic scenarios.
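To make the re-entrant flow problem concrete, here is a deliberately tiny simulation sketch. It is not Parikh’s model, a commercial fab simulator, or a real fab recipe; the cell names, tool counts, step times, and lot counts are invented for illustration.

import heapq
import random

# Hypothetical re-entrant route: every lot revisits the same
# litho / deposition / etch cells once per interconnect layer.
TOOLS = {"litho": 2, "dep": 3, "etch": 2}             # tools per cell (assumed)
STEP_HOURS = {"litho": 1.5, "dep": 2.0, "etch": 1.0}  # hours per visit (assumed)
ROUTE = ["litho", "dep", "etch"]
LAYERS = 8      # re-entrant passes per lot
N_LOTS = 20

def simulate(seed=0):
    """Crude first-come, first-served model: lots queue for tools and
    revisit the same cells LAYERS times. Returns completion time per lot."""
    rng = random.Random(seed)
    free_at = {cell: [0.0] * n for cell, n in TOOLS.items()}
    done = {}
    # Each event is (time the lot is ready, lot id, next step index).
    events = [(rng.uniform(0.0, 4.0), lot, 0) for lot in range(N_LOTS)]
    heapq.heapify(events)
    while events:
        ready, lot, step = heapq.heappop(events)
        if step == LAYERS * len(ROUTE):
            done[lot] = ready            # lot has finished its last layer
            continue
        cell = ROUTE[step % len(ROUTE)]
        # Dispatch to whichever tool in the cell frees up first.
        idx = min(range(TOOLS[cell]), key=lambda i: free_at[cell][i])
        start = max(ready, free_at[cell][idx])
        finish = start + STEP_HOURS[cell] * rng.uniform(0.9, 1.3)
        free_at[cell][idx] = finish
        heapq.heappush(events, (finish, lot, step + 1))
    return done

if __name__ == "__main__":
    times = simulate()
    print("average cycle time: %.1f hours" % (sum(times.values()) / len(times)))
    print("slowest lot:        %.1f hours" % max(times.values()))

Even this toy version shows why fab scheduling resists the linear-flow tools used elsewhere: a lot that loses one queueing decision at a congested cell pays for it on every subsequent layer, and adding equipment downtime or shifting lot priorities, as Parikh describes, only compounds the nondeterminism.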

As software becomes more capable, fabs will be able to predict wafer flows more accurately, allowing them to balance loads more effectively. The resulting improvements, known collectively as “300 mm Prime,” are expected to help fabs optimize layouts and reduce waiting times and WIP inventory, giving them the flexibility they need to meet ever-shifting market demands. Between now and Solid State Technology’s 60th anniversary, fabs will need both process enhancements and superior manufacturing to remain competitive.

Reference

1. Emerson W. Pugh, Lyle R. Johnson, John H. Palmer, IBM’s 360 and Early 370 Systems, MIT Press, Cambridge, 1991.

Katherine Derbyshire is a contributing editor at Solid State Technology. She founded consulting firm Thin Film Manufacturing, PO Box 82441, Kenmore, WA 98028 US; [email protected], www.thinfilmmfg.com.