50th Anniversary Perspectives

11/01/2007

On the occasion of Solid State Technology’s 50th anniversary, we asked our Editorial Advisory Board and other luminaries to opine on the past 50 years and future 50 years of semiconductor manufacturing. We asked them a series of questions: What were the key innovations that brought us here? What major trends tell the most interesting story of evolution and revolution? What new innovations will play a significant role in semiconductor manufacturing, both in the near term and the far-off future, and what new devices and technologies might they enable? Their varied answers will surprise you. - Julie MacShane, Managing Editor

Leveraging the quantum nature of materials

During the course of the last 50 years, we saw the birth and evolution of the semiconductor industry. This beginning and its subsequent progression was based on our understanding of the atomic structure of materials. Our knowledge was then used, together with ingenious design and manufacturing, to develop the revolutionary electronic and communications products we use in our everyday lives. In parallel, and based on the same understanding, advanced lasers came into being, leveraging the quantum nature of materials. These two parallel developments have complemented each other not only in fundamental technology terms, but also in terms of manufacturing revolution and evolution.

A perfect example of this manufacturing revolution is how laser link processing has enabled significant yield improvements over the last 25 years, beginning with improving yield in memory DRAM chips. This technology has evolved in a manner similar to that of the overall semiconductor industry: It has become more productive over time at a lower cost, which is the basic principle at the heart of Moore’s Law.

Another example of this is device packaging. The semiconductor process is not considered finished until you end up with a fully packaged product. Lasers have played a pivotal role in semiconductor packaging by advancing us into areas where traditional drills came up short. With lasers, we are now able to drill very small microvias on semiconductor IC packages, which was not possible before their advent. Again, this technology evolved consistently with Moore’s Law.

In the near term, I believe lasers will be playing a significant role in new semiconductor applications like annealing, metrology, wafer singulation, as well as denser assemblies of semiconductor chips. Looking further out, transistor functionality will continue to evolve, as transistors continue to be packed even more densely and perform at faster speeds and lower cost. I have no doubt that current materials will be replaced by new ones, as will current structures. Once again, lasers will play a significant role in the fabrication of the semiconductors of tomorrow.

Significant applications will emerge from this ongoing evolution that will possess a great deal of real-time information-processing ability, enabling almost everything we touch, wear, or use to make us more intelligent. The higher the processing bandwidth, the more “intelligent” the device, which will improve productivity for all of us and make us more efficient, smarter, and even more connected. This could encompass anything from wearable computers to real-time self-monitoring devices, to possibly having a chip embedded in the body that boosts a person’s processing power or short/long-term memory, allowing for improvements in the quality of life. Whatever the case, there is no doubt that we are amid a semiconductor revolution. And this is just the beginning.

Nick Konidaris has more than 25 years of industry experience. Contact him at ph 503/641-4141.

A mature industry? Hardly.

Over the past 50 years, the semiconductor industry has changed every culture within its reach in the areas of industry, biotechnology, medicine, education, entertainment, transportation, communications, and consumer products. Applications have been continuously added and extended, ultimately driving necessary changes in our technology base; some of these changes have been evolutionary and others disruptive. Yet some people have declared the semiconductor industry “mature.”

Webster’s Dictionary defines “mature” as having “completed natural growth and development;” “reached slow but stable growth;” or for wine, “attained final or desired state.” A brief review of semiconductor technology, markets, and business dynamics may give us some clues as to whether or not our industry has matured.

With a lump of germanium as the first solid-state point-contact transistor, our industry started in 1947 at Bell Laboratories. The germanium junction transistor amplifier, a Nobel Prize-winning invention, quickly followed in 1948. Next came a material change with the introduction of the silicon junction transistor by Texas Instruments in 1954. Most significantly, in 1959 the integrated circuit, or chip, was invented independently by two companies: TI (germanium) and Fairchild Semiconductor (silicon). The transistor solved the vacuum tube’s performance, size, handling, and cost problems, and the IC solved the interconnect problems introduced by the ever-increasing number of transistors and passive components designed into a circuit.

Since then, scaling techniques have met the “shrink” demands (increased density and metal levels) of each new IC generation. Designers have pushed manufacturing process, equipment, resist, and wafer suppliers, with little change in other material requirements, and fab managers have pushed to continuously improve cost-effectiveness by making gains in quality, reliability, throughput, fab utilization, productivity, and yield.

Looking forward, it is hoped that process, equipment, and materials developed for a given generation might be suitable for the next two generations. In practice, these technologies change incrementally every 18 months, although each step has about seven to 10 years of R&D behind it. Integration of those changes is expected to maintain or improve the performance/cost ratios needed to launch new applications that will be affordable to the ultimate users. Three recent ongoing innovations (low-k dielectric + Cu dual damascene, immersion optics, and high-k dielectric + metal gate) are disruptive by nature and have had, and will have, positive effects such as the continued use of photolithography. None can be optimally used unless they are designed for manufacturing, terabit data handling, and automatic process control.

One major obstacle is that implementing innovative technology requires financial resources that are increasingly beyond the limits of most companies. Photolithography, for example, has been challenged by the likes of e-beam, ion-projection, and x-ray since the late 1960s, but it is still the critically important workhorse of the industry; maskless litho has yet to emerge. From the 1960s through 2007, photolithography has been the primary wafer exposure technology, used sequentially and in tandem for maskmaking, contact printing, reticle generation, and direct wafer exposure. Its evolution has been well chronicled: from 436nm wavelength optics, 250nm overlay, 1000nm device critical dimension (CD), and 20wph to 193nm optics, 6nm overlay, 45nm CD, and 130wph. During this time, development costs have increased exponentially, yet the cost of devices has remained essentially constant. With 193nm wavelength optics, 45nm CD logic devices are now in high-volume production. Similar scenarios in etch, ion implant, packaging and test, and materials have all contributed to the industry’s ongoing progress.
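
To put those optics numbers in context, lithographers summarize resolution with the Rayleigh criterion, CD = k1 x wavelength / NA. The sketch below is illustrative only; the NA and k1 values are rough assumptions for each era, not the specifications of any particular scanner.

    # Rayleigh resolution criterion: CD = k1 * wavelength / NA.
    # The NA and k1 values below are rough, assumed figures for illustration.
    def min_cd_nm(wavelength_nm, na, k1):
        """Smallest printable critical dimension, per the Rayleigh criterion."""
        return k1 * wavelength_nm / na

    print(min_cd_nm(436, 0.30, 0.80))  # g-line era: roughly 1000nm-class CDs
    print(min_cd_nm(193, 0.93, 0.22))  # aggressive dry 193nm: ~45nm-class CDs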

R&D models, driven by economic forces, have moved from fiercely guarded competition to cost-sharing partnerships and consortia that jointly develop technologies common to all participants. Business models likewise are changing. Integrated device manufacturers (IDMs) represented virtually all manufacturing capacity before about 1985; systems houses and fabless chip companies had been left to the mercy of available IDM capacity. Then TSMC introduced the foundry model, and fabless firms and systems houses proved that theirs is a profitable business. Some IDMs went “fab-lite,” setting an industry-wide trend toward greater use of foundries. Foundry output contributed about 20% of total device revenues in 2006. Outsourcing has given corporations the opportunity to concentrate resources on developing their core business and can drive more profitable growth.

Because of our pervasiveness in served and emerging markets, we are instrumental in the accelerating growth of emerging nations. At this time, perhaps two billion or more people, about one third of the world’s population, have direct access to products that use our technology. Other regions will develop thanks to medical and biotech tools made possible by the semiconductor industry, which can be used to eradicate diseases and famine; bootstrap health, spirit, and stamina; extend lifetimes; and improve quality of life.

From that lump of germanium, we have reached atomic-scale device architecture. This scale is severely testing present CMOS architecture’s functional limits. With its mature judgment, the semiconductor industry will continue to develop materials, processes, sensors, software, diagnostic and control instrumentation, and manufacturing equipment suitable for future devices. Within the next 50 years, the transcendent order may be as follows: CMOS architecture, beefed up with high-k dielectric plus metal gate and maybe other disruptive technologies, will hold center stage from the near term through perhaps the mid-term, the next 15-20 years. Perhaps we will see quantum gates in the longer term.

The paths taken will be diverse and exciting. Major manufacturing process, equipment, and materials changes are certain. But thus far, our core knowledge, our innovative selection and compounding of materials, and our process and equipment development and integration have built reliable design and manufacturing capabilities for higher-density, more complex devices at lower cost per bit. From CMOS to quantum gates, the semiconductor industry, now quite agile, will continue to create, harness, and thrive on change, whether evolutionary or disruptive. Are we a mature industry? Hardly!

Bill Tobey is an SST Editorial Advisory Board member. He has more than 40 years of industry experience. Contact him at [email protected].

An historic view and future challenges

I entered the semiconductor industry when I went to work for AT&T after graduate school in 1967. This was an interesting time, with only a few big corporations in the business like AT&T, Fairchild, RCA, TI, and General Instruments. One-inch and 1.25-in. wafers were common, and 2-in. wafers were on the horizon. Most processing equipment was homemade; however, tool makers like Applied Materials, Siltec, Perkin Elmer, Thermco, and Blue M were beginning to take notice that this industry could indeed become a true market.

Device structures were simple with n- or p-channel MOS and no CMOS or Bi-CMOS; bipolar was used for drivers and amplifiers. Epitaxy was used for bipolar and CMOS, and unique doping sources and low temperature CVD were hot topics of research. Material purities and crystal dislocations were concerns, which brought about creative techniques for gettering, crystal growth, and doping materials. Aluminum was the metal of choice for most chips, but exotic tri-metal stacks with hermetic cap layers such as Si3N4 gained favor.

In the late 1960s and early 1970s, there was an explosion of entrepreneurs. Growth in IDMs, as well as the toolmakers to support the growth, was rampant. Silicon Valley was in its heyday in the early 70s and had every appearance of the “Wild West” or “Gold Rush Days.” During this period, it seemed that the boundaries on everything were being pushed weekly and one needed to constantly innovate with IP to be the “hot start-up.” Devices were shrinking, and Moore’s Law was being validated. Both IDMs and toolmakers were solidifying their strengths and niches, and by the end of the 1970s the challenges of submicron linewidths were looming.

The challenge now was to follow Moore’s Law, and the method was simple: design improvements and shrinking the feature size. Material challenges were not high on the problem list yet. Fear of critical dimensions below one micron loomed, and fabs began to specify better than Class 100 cleanrooms with reduced particulates in fluids and gases. New light sources for lithography, and reduced thermal budgets with a host of new materials were on the way, as were higher costs to build a fab with 6-in. wafer tools. You had to put more layers on the chip to interconnect the number of transistors needed to follow Moore’s Law! By the late 1980s and early 1990s, high-frequency devices were needed, too.

We are now in the age of 300mm wafers with a push to the 32nm node. A basic overview from “an aging fab-manager-turned-executive” is that we will see the sunset for silicon devices, as we know them, in the next 15-20 years. At the 32nm node, scaling has already driven the industry into the fundamental limits of silicon. Novel device designs, processes, and materials will be needed to maintain the cost-performance trends of this industry that is now driven by consumer markets.

There are five basic CMOS transistor challenges at the 32nm node: mobility, inversion threshold, source/drain (S/D) resistance, gate length, and parasitic capacitance. The need for increased mobility in both p- and n-channels has resulted in the use of strained silicon, germanium, silicon carbide, different crystalline faces of silicon, and new processing techniques. The need for lower inversion thresholds has introduced high-k gate structures, new metal gates, work function engineering, and variation in the p- and n-channel gate structures, along with spacer and gate isolation techniques. Controlling and reducing the S/D resistance has driven evolution in silicide structures, cluster implants, amorphization, and changes in annealing technology.

New device designs such as tri-gate transistors will have unique integration challenges involving diffusion and isolation processes. There isn’t one approach that will work for all device types, so we’ll see both divergence and delays in the incorporation of these elements by DRAM, ASIC, logic, MPU, and wireless device manufacturers. High-end products vs. standard products will also demonstrate divergence. To develop novel devices and fab processes with the required breadth and depth, alliances must be formed to ensure that products are developed in time and at the correct cost points.

Specialty materials suppliers may need to find new business models. With device dimensions reaching three lattice constants (based on silicon), how much material will actually be used in device manufacturing? Will there be enough profit from atomic quantities for material suppliers to recoup their R&D time and follow-on manufacturing costs? There are inherent risks in materials development, particularly in an era when “green products” are preferred, so the risks should be shared along with the rewards.

William Kroll is an SST Editorial Advisory Board member. He has 40 years of experience in the industry. Contact him at ph 908/991-9200.

A future of 3D structures and CNTs

Historically, the semiconductor industry has always been known for its cyclicality, driven by continually evolving and changing market demand. Much of this demand has been sparked by technology revolutions, which created changes that, in turn, drove the demand for conversion technologies. The first wave was the personal computer (PC) revolution. The second wave, the digital consumer revolution, has been in effect for several years and, driven as it is by global consumer buying habits, shows no signs of cresting in the near future. With that said, I believe we will never again see the single product or killer application that matches the significant impact made by such watershed innovations as the digital watch, the calculator, or the PC.

So what have we learned over the past 50 years?

Equipment manufacturers must develop technologies that provide extendibility to meet customers’ roadmap requirements for multiple device generations, while enabling the continuation of Moore’s Law, if they are to remain successful.

To remain competitive, equipment manufacturers must provide customers with tools that incorporate essential conversion technologies. For example, laser processing technology is anticipated to revolutionize front-end processing. Advanced packaging is another market where a technology conversion is taking place; e.g., bump processing is replacing wire bonding, while wafer-level chip-scale packaging is growing in popularity to help meet tight form-factor requirements for today’s and tomorrow’s digital devices.

Since Solid State Technology began in 1957, the semiconductor industry has changed dramatically: in device technology, in significantly reduced transistor costs, and now in the migration toward the Pacific Rim and the growing importance of foundries. It is my belief that those companies offering technology solutions with low cost-of-ownership and global support will continue to reap the greatest benefits, as well as offer the greatest value, in this vital industry.

Near-term expectations

Escalating cost pressures will force several things to happen in the near future. For one, I believe that we will continue to see a consolidation of the semiconductor industry due to the increasing costs associated with building state-of-the-art chipmaking factories. With that, China will play a more important role in the manufacturing process. However, I also believe this will take much longer to materialize than current forecasts indicate. Also, foundries will grow at the fastest rates, employing strategic technical alliances and other strategies to meet their future needs.

Long-term outlook

I believe the technology driving Moore’s Law will continue, and materials limitations will cause a transition from 2D to 3D structures. We currently manufacture transistors in a 2D, X-Y plane. Eventually, amorphous silicon converted to a crystalline structure with lasers will form layers of transistors in the Z plane, creating 3D structures. This concept will keep Moore’s Law viable and actually shorten the time needed to double density.

While silicon may be around for another 20 to 40 years, there is anxiety over what follows. What material comes after silicon? Once atomic sizes are attained, we’ll have run out of dimensions. Currently, carbon nanotubes top the list of theoretical solutions on the horizon. For such technologies, the integration level will be incredible at the nanoscale. But again, as in the past, conversion or transition technologies will be required to enable new process geometries. Once again, the beneficiaries in this new industry landscape will be companies that possess a solid global infrastructure and provide advanced technologies that enable precise, economical development of devices with the desired properties and functions.

We have significantly underestimated the future use of semiconductor devices, and one’s imagination can only begin to touch on the impact they will make on the human race going forward. Given the amazing innovations of the last 50 years, I am eager to see today’s imaginings become tomorrow’s realities.

Contact Arthur W. Zafiropoulo at ph 408/577-3009 or Laura Rebouche, VP of Investor Relations, at [email protected].

Two generations of Beamers reflect on 50 years of semiconductor manufacturing

Since the 50-year scope of SST’s 50-50 vision assignment exceeds the professional lifespan even of dedicated IBMers, the authors chose to pool their experiences to capture past highlights and speculate on future trends. For the past five decades, the semiconductor industry was propelled forward by the ever-increasing integration density of circuit components on a silicon chip, roughly in accordance with Moore’s Law. The circuit density drove down the cost per function, enabling new applications with higher volume and profit margins, allowing in turn the large investments required to improve component density through linear dimensional scaling.
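
That self-reinforcing loop is easy to see in a toy calculation. The sketch below assumes, purely for illustration, a roughly flat cost to manufacture a chip and a transistor count that doubles every two years; every specific number is invented.

    # Toy model of Moore's Law economics: if the cost to make a chip stays
    # roughly flat while its transistor count doubles every two years, the
    # cost per function collapses. All numbers here are invented.
    def cost_per_transistor(year, base_year=1970, base_count=2_000, chip_cost=10.0):
        doublings = (year - base_year) / 2
        return chip_cost / (base_count * 2 ** doublings)

    for y in (1970, 1980, 1990, 2000):
        print(y, f"${cost_per_transistor(y):.2e} per transistor")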

The pre-CMOS solid state revolution

In order for the self-sustaining mechanism of semiconductor scaling to take hold, early silicon technology needed a safe and predictable volume base. This needed volume was provided in late 1968 by the decision to equip IBM’s 370 computer systems with FET monolithic main memories. This landmark decision was made against large prevailing technical odds and against a very well established magnetic core technology, which at the time was becoming heavily entrenched due to major investments in the construction of many new magnetic core memory production facilities in the US and Europe.

The technical base for this strategic decision was laid in one of the small meeting rooms along the windowless aisles of IBM’s T.J. Watson Research Center in Yorktown Heights, NY. At that time, the FET memory development team had produced working test sites supporting the FET circuit and processing groundrule assumptions, and there was a paper design of a 512-bit FET memory chip. The question to be decided in the meeting was: should this chip be implemented in n-channel or p-channel FET technology? The only solid measurement data were available for p-channel test sites, because all the processing problems related to the integrity of the gate oxide could be handled more easily in p-channel. But n-channel had the greater technical promise, because its higher carrier mobility translated into better circuit and memory performance for a given set of processing groundrules and power consumption. Performance was a key decision factor, because only with n-channel would it be possible to achieve the same memory performance as specified for the then plan-of-record core memories.
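
A hedged illustration of the mobility argument: in the textbook square-law MOSFET model, saturation drive current scales linearly with carrier mobility, and electrons in bulk silicon are roughly two to three times more mobile than holes. The mobility figures below are standard room-temperature textbook values, not the team’s 1968 process data.

    # Square-law model: I_Dsat ~ (mu * Cox / 2) * (W/L) * (Vgs - Vt)^2, so with
    # equal geometry, oxide, and overdrive, drive current tracks mobility.
    MU_N = 1350.0  # electron mobility in bulk Si, cm^2/(V*s), ~300K textbook value
    MU_P = 480.0   # hole mobility in bulk Si, cm^2/(V*s), ~300K textbook value

    print(f"n-channel drive-current advantage: ~{MU_N / MU_P:.1f}x")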

Carefully weighing the risks and rewards, business, technical, and personal, the team decided to go n-channel. Those in the room were guardedly optimistic that it could be done. Within a few months after this meeting, the team had produced a working monolithic memory card, based on an n-channel 512 bit FET chip. When this card was plugged into an IBM Model 158 computer system, it worked as reliably as the core memory that it replaced. Then and there, IBM made the decision to bet on monolithic main memories, and the silicon micro-electronics volume avalanche took off.

How inconceivable the full impact of this turn of events really was is reflected in a comment made to one of the authors by an IBM executive present at the decision-making meeting: “Young man, this was a nice presentation. But you did not have to go overboard, promising us future FET memory costs of better than 1 cent/bit.”

Innovation to sustain evolutionary scaling

Almost four decades after the decision that put the self-fueling cycle of semiconductor scaling in motion, an equally memorable meeting took place in the executive conference room of IBM’s Semiconductor Research and Development Center. In the final days of the Power 6 Processor development work, the decision was made to replace alternating phase shifted mask lithography (altPSM) with the more cost-effective sub-resolution assist feature (SRAF) approach. This brought to an end one of the significant drivers in CMOS scaling.

Yielding fully functional microprocessor chips was the closest altPSM had ever come to being used on revenue-generating product, and this in itself represented a significant technical achievement, culminating 15 years of development work. This marked the third, and very likely last, technology generation for which altPSM had been ramped up as a perpetual backup solution and then dropped in favor of more manufacturable lithography techniques.

While never used on revenue-generating products at IBM, the inherent complexity of altPSM drove the development of technologies that were instrumental in maintaining the aggressive integration density improvement necessary to keep the IC industry profitable.

Early experiments in manually defining required phase regions in a chip design quickly led to the realization that such efforts would have to be automated for altPSM to be viable. These layout manipulation efforts gave birth to an entirely new application of electronic design automation (EDA) software. While flawless generation of phase shifted layouts remained challenging, spin-off projects such as layout manipulation to compensate for proximity effects and the design of assist features created layout manipulation technology without which scaling beyond 250nm would not have been possible.

The realization that even the most sophisticated altPSM design code could not generate a functional solution for arbitrary layouts forced the interaction between process and design engineers, and laid the foundation for the now hugely popular science of “design for manufacturability” (DFM). Specifically, the complexity of putting phase design solutions into the designer’s hands led to the invention of restricted design rules (RDR), which, while vastly simpler than some contemporary DFM proposals, will be vitally important in scaling beyond 45nm.

Ironically, the fact that all attempts at developing single-exposure altPSM techniques failed, while contributing significantly to altPSM’s uncompetitive cost, made it an excellent development platform for the double-exposure techniques that are a key component of patterning strategies for technology nodes beyond 45nm.

By stimulating design-technology integration and giving rise to DFM, altPSM, originally born as a tool to support linear dimensional scaling, has opened the door for future cost/performance improvements of VLSI components, while at the same time capitalizing on much of the processing investment that has already been made.

Future outlook

Past industry trends suggest that speculating too far into the future is futile, as ultimately most predictions are proven wrong and most fundamental limits are eventually exceeded. However, the history of solid state technology also provides the comfort of knowing that somewhere an engineering team is taking informed risks to set a new course for technology, while elsewhere engineers are harvesting the insights gained from persistent development, each doing their part in keeping this industry going for another 50 years.

Wolfgang Liebmann retired from IBM as an assistant group executive, IBM Technology Group, Armonk, NY.

Lars Liebmann is a member of SST’s Editorial Advisory Board. He is a Distinguished Engineer in IBM’s Semiconductor Research and Development Center. Contact him at [email protected].

The once and future design flow

You can travel the semiconductor industry timeline through the 1960s without ever encountering the term “electronic design automation” (EDA). But by late in the decade, the landscape was already rapidly changing. Gordon Moore’s conjecture was on the path to becoming “Moore’s Law,” with increasing complexity quickly outpacing the option of hand-cutting rubylith masks from paper designs. With the availability of digitizing tablets, keyboards, CRTs, and mainframe computers weighing more than 500 lbs but bringing an impressive 128kb of memory to the table, commercial CAD got underway with PCB and IC digitizing, wiring, mapping, etc.

The 1970s saw a move toward automating not just the drafting, but the design as well, introducing place and route technologies. By the early 1980s, computer-aided engineering environments introduced such enabling technologies as schematic capture and SPICE. Possibly the biggest productivity impact of the new EDA industry was the “go digital” movement. Digital design allowed us to model the design universe using simple, safe building blocks, eventually multiplying those robust blocks by more than a billion. The ability to model/simulate merged with the newly developed ability to synthesize and optimize a design, culminating in a time-tested pattern of IC design: “bottom-up modeling” and “top-down design.” This approach worked well as we navigated the challenges in the design roadmap at ≥0.13µm nodes. Along the way, timing closure and test issues, signal integrity, increasing verification demands, and the advent of silicon IP re-use all pushed EDA tools to quickly become complete, correlated, and concurrent.

Then we hit 90nm, and physics started sneaking into the “bottom-up/top-down” pattern. It was no longer guaranteed that a “perfect” design would be possible to manufacture. We needed more accurate modeling of what was happening on the manufacturing floor so that designers could anticipate and reduce the physical effects in the first place. The “bottom-up/top-down” pattern now bridged the hand-off wall, extending down into the manufacturing world. Optical-proximity correction (OPC), for example, was created when fabs started using 193nm wavelengths to create transistors of much smaller dimensions. Today, that same lithography technology is creating 45nm transistors! EDA has made this possible along the way by providing the necessary modeling and then automating the steps in between the designer and fab floor. This is where we are today: at the end of the “easy” part.
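
As a flavor of what that automation does, here is a deliberately tiny, rule-based caricature of OPC: isolated lines tend to print narrow, so their edges are biased outward on the mask. Real production OPC is model-based, simulating the optics; the rule table and all numbers below are made up purely for illustration.

    # Rule-based OPC caricature: widen a line on the mask when its nearest
    # neighbor is far away. The rule table is invented for illustration.
    BIAS_RULES = [          # (minimum spacing in nm, bias per edge in nm)
        (0,   0.0),         # dense lines: no correction
        (200, 2.0),         # semi-isolated: small bias
        (500, 5.0),         # isolated: larger bias
    ]

    def mask_width_nm(drawn_width_nm, spacing_nm):
        """Mask width after the spacing-dependent edge bias is applied."""
        bias = 0.0
        for min_spacing, per_edge in BIAS_RULES:
            if spacing_nm >= min_spacing:
                bias = per_edge  # last matching rule wins (list is sorted)
        return drawn_width_nm + 2 * bias

    print(mask_width_nm(45, 100))  # dense: 45nm drawn stays 45nm on the mask
    print(mask_width_nm(45, 800))  # isolated: 45nm drawn becomes 55nm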

Process uncertainty and variability are the new realities at 65 and 45nm. Systematic, not random, defects are now driving the bulk of yield loss mechanisms, and increasingly what we have considered the “modeling reality” is becoming a “statistical reality.” At the cross-over between 65 and 45nm, the push is on for critical-area analysis and variation-aware “clairvoyance” and optimization to be designed in.
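
Critical-area analysis ultimately feeds yield models such as the classic Poisson form Y = exp(-D0 * Ac), where D0 is the defect density and Ac is the layout’s critical area for a given defect size. The sketch below uses invented numbers purely to show the shape of the trade-off.

    import math

    # Poisson yield model: the fraction of die that escape all killer defects.
    # D0 and critical-area values are invented for illustration only.
    def poisson_yield(d0_per_cm2, critical_area_cm2):
        return math.exp(-d0_per_cm2 * critical_area_cm2)

    # Hypothetical 1 cm^2 die whose layout presents 0.4 cm^2 of critical area:
    for d0 in (0.1, 0.5, 1.0):
        print(f"D0={d0}/cm^2 -> yield ~ {poisson_yield(d0, 0.4):.1%}")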

As the IC world becomes more statistical, easy gains in speed will continue to be hampered by problems with power dissipation. The solution, in addition to continued advances in low power design, will in part lie in the increased use of multicore devices. If they can’t go faster on their own, you can at least spread out the workload and leverage the effort across several cores working concurrently. We are already seeing the use of 2, 4, 8, and even 16 processors in a single physical package. It is easy to imagine 32, 64, and eventually perhaps thousands of processors being utilized on a single device, whereby the von Neumann model gives way to the multicore model as the new standard.
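
The “spread out the workload” idea is the same one programmers exploit in software today. A minimal sketch, assuming the work divides into independent chunks (the function and sizes are arbitrary stand-ins):

    # Spreading independent work across cores with a process pool.
    from multiprocessing import Pool

    def simulate_block(block_id):
        """Stand-in for one independent chunk of work (e.g., one circuit tile)."""
        return block_id, sum(i * i for i in range(100_000))

    if __name__ == "__main__":
        with Pool(processes=4) as pool:       # four cores working concurrently
            results = pool.map(simulate_block, range(16))
        print(f"{len(results)} blocks completed")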

New processes and materials will greatly increase the challenge in modeling. We’ll need to build on the incredibly good 3D capabilities now in place to simulate both individual transistor characteristics and proximity effects. This understanding will become even more important as new devices appear that capitalize on the vertical nature of the chip. As city planners have known for years, if you can’t build out, build up! The day may come when we talk about “transistor high-rises.”

Of course, as chip complexity increases, more of the components on a chip will attain “standard” status, prompting both the technical and economic pressure for increased design IP re-use. And just as physics has forced a blurring of the lines between design and manufacturing, embedded software use trends indicate that we’ll need to figure out how to merge the two universes of software and hardware development.

Eventually, we’ll see the end of the classic CMOS roadmap. What then? The use of carbon-based nanotubes and organic transistors would force unprecedented industry partnerships between new end-market customers, chip companies, and the community of experts who will supply the modeling and automation capabilities needed to deliver the goods on time and on budget.

Aart de Geus received the IEEE Robert Noyce Medal in 2007 “for contributions to, and leadership in, the technology and business development of electronic design automation.” Contact him at ph 650/584-5000.

Still sorting out technology origins

The first 50 years of IC technology saw a remarkably rapid convergence of new concepts, sophisticated technologies, defect reduction and control, yield improvement, and reliability understanding and improvement. Process technologies were developed for silicon crystal growth and wafer fabrication, photolithography, thin film deposition, plasma etching, ion implant, diffusion, annealing, planarization, cleaning, and the metrology to control them all. Many careers and companies have been built around turning these technologies into a profitable enterprise.

There were 10 years of prior semiconductor technology history before the launch of Solid State Technology. During this early period, the transistor evolved from a point contact device in bulk germanium to a planar transistor in silicon. At this point, the concept of solid-state technology was powerful enough to warrant the launch of this trade magazine in 1957 by visionary editor Sam Marshall. Working on planar transistors in silicon thus puts the roots of silicon chemical mechanical polishing (CMP) very close to the origins of ICs as we know them today.

But it wasn’t until 1983 that chemical mechanical polishing (CMP) evolved into chemical mechanical planarization (CMP). Ten years ago, in the 40th Anniversary Issue of Solid State Technology, I had the honor of writing an article called “The Early Days of CMP,” in which I mistakenly identified a group in the IBM East Fishkill Base Technology organization as the birth parents of CMP. I have since been advised that they were actually the adopting parents. The birth parent was another denizen of IBM East Fishkill from the other side of the facility, in the silicon wafer plant.

Klaus D. Beyer is the source of the concept of using polishing technology for planarization in IBM. He had been developing bare silicon wafer polishing and cleaning methods since 1981, and had come to the conclusion that megasonic cleaning was better for removing particles and reducing surface scratches than the brush cleaning that was widely used at the time. This project was diverted, however, and Klaus was asked to assist on a project to replace thermal oxidation with a low-stress oxide fill process for device isolation. This would later evolve into the trench isolation we know today, but before that could happen, the oxides filling the trenches needed to be planarized. No sane person in those days would have suggested taking a pristine silicon device wafer back into the particle-filled environment of a polishing process. The reason that Klaus thought he could do so and still be considered sane was that he had just discovered that particles could be reliably cleaned off the wafer without causing any damage.

Thus, CMP was born, and the project was staffed shortly thereafter in the Base Technology group. The original trench isolation objective was expanded to include planarization of oxide dielectrics over aluminum interconnects. To the outside world, it appeared that shallow trench isolation followed years behind oxide and tungsten planarization, but now we all know that it was STI and Klaus Beyer who were there first.

Or were they? I still have the nagging memory of a conversation I had many years ago with Jerry Zimmer, now CTO of sp3 Diamond Technologies. He told me of a chemical mechanical planarization project at Raytheon that dated back to about 1974. Neither of us could find the reference, so we have another 10 years to find this lost information so I can print another correction in the 60th Anniversary Issue.

Looking forward to the next 50 years of CMP is an exercise better left to those actually still developing device architectures, because that is ultimately where the inventive sparks will come from. The decades-long practice of pushing more and more development work out of the fabs and into the suppliers has limited utility; it is better suited to improving manufacturing productivity and the supply chain infrastructure for equipment and the materials needed for known processes.

The new device architectures needed to carry Moore’s Law another generation or two won’t be developed by a single supplier, or even by a vast consortium of suppliers. A device developer will always be needed to pick a direction, develop a concept or two, throw out the ones that aren’t going to work in the real world, and convince other chipmakers and suppliers that this is the way to go.

Michael A. Fury is the founder of InterCrossIP Management LLC. He was a process development manager in the early days of CMP during his 17 years at IBM. Contact him at [email protected].

Managing transitions in the semiconductor lithography ecosystem

The history of the semiconductor lithography industry is one of deep interdependence among key elements: the lens and energy source that are integrated into tools, the tools themselves, and the mask and resist that must be used with the tool for the lithography process to take place. It is the pace of advance of this ecosystem as a whole, rather than the pace of advance of individual elements, that regulates how far and how fast lithography technology, and hence the semiconductor manufacturing sector, advances.

Looking back at the waves of technology transitions from the industry’s beginnings, we can see cases of fast starts (proximity, projection, g-line, i-line), slow starts (contact, DUV 248nm, DUV 193nm), and false starts (x-ray, e-beam, DUV 157nm). Two key drivers of a new technology’s success help account for these differences. First, how quickly will each of the ecosystem’s elements be ready for commercial use? (For more information, see “Technology interdependence and the evolution of semiconductor lithography” on p. 51 of SST’s November issue.) Second, what pace of performance improvement can be expected from the established ecosystem?

A case in point is the 0.35µm node, which was expected to be served early on by the then-emerging DUV 248nm technology. The emergence of DUV 248nm, however, was held back by challenges in maskmaking and, especially, in developing a viable resist material. Thus, separate from the challenge of tool readiness, the technology’s progress was slowed by the absence of these key complements. At the same time, incremental improvements in lens and resist technologies, as well as the rise of resolution enhancement technologies (RETs), served to increase the performance of the incumbent i-line technology. We observe an even more extreme interplay in the cases of x-ray and DUV 157nm. These two forces, the one delaying the emergence of the new technology, the other extending the viability of the established incumbent, are fundamental determinants of technology substitution, and of economic outcomes.

In considering the industry’s future evolution, it would be wise to examine these ecosystem trends and their implications. DUV 157nm, DUV 193nm immersion, and EUV seem to be continuing a trend toward increasingly difficult challenges not only for tool suppliers, but also for their ecosystem partners. This trend raises a number of implications. Most directly, it will confront members of the ecosystem with greater R&D, and hence investment, requirements. There is a real question as to the ability and willingness of actors in different supplier roles (lens, source, mask, resist, metrology, etc.) to make these resource commitments. In the case of maskmaking, for example, it seems that the investment required to develop upcoming generations of maskmaking tools may soon exceed their economic justification, at least from the supplier’s perspective.

No less challenging is the increasing need for coordination among actors in the ecosystem, in terms of both core technology development and the timing of these efforts. The danger for the industry is a transition to a “by all means, after you” approach, in which everyone waits for others to make the first investment. Traditionally, semiconductor manufacturers have played a critical role in managing this coordination, either as independents or through consortia. As the coordination challenges increase, we expect that they will need to commit ever greater resources to incentivize the ecosystem. Whether these incentives will be targeted at the broad sector or at individual champion firms will play a key role in determining the extent of upcoming consolidation in the ecosystem. The questions of who will develop, who will subsidize, and who will profit will only increase in importance.

What does this mean for the future? The basic structure of the semiconductor lithography ecosystem has remained unchanged since the birth of the industry. As we argue above, maintaining this structure as the industry advances is likely to be an increasing burden for all involved. However, changing this structure may be even harder.

We predict that a fundamental change in ecosystem architecture will occur under one of two conditions: either an existing element of the ecosystem reaches a breaking point (e.g., masks), which will lead to a reconfiguration of the ecosystem (e.g., adoption of maskless lithography), or an alternative approach, comprising an entirely new ecosystem, could emerge. For either of these scenarios to play out, however, the new approach would need to promise and demonstrate to semiconductor manufacturers vastly superior performance in terms of resolution, costs, and scalability. Keeping in mind that these alternative ecosystems will also face coordination challenges, and given expected improvements in the performance of the ‘traditional’ ecosystem, we expect that the traditional ecosystem, increasingly burdened though it may become, is likely to maintain its dominance into the foreseeable future.

Ron Adner is Akzo-Nobel Fellow of strategic management and associate professor at INSEAD. Contact him at [email protected].

Rahul Kapoor is a PhD candidate in Strategy at INSEAD.

Driving ICs from single to multilevel to 3D

The past, current, and future evolution of on-chip interconnects provides a fascinating example of how simultaneous innovations in materials, devices, equipment, and design technology provide the basis for continuous improvement in integration of an ever-increasing number of devices on a chip.

When the first planar silicon IC was invented, the choice of metal for interconnects was between gold (Au) and aluminum (Al). Both had excellent conductivity and resistance to corrosion. Aluminum, selected because of its adhesion to silicon oxide, continued as the material of choice for many generations. In the 1970s, when most circuits used bipolar transistors, scaling caused the current density in the wires to increase. This in turn gave rise to field failures due to electromigration, wherein the current caused nonuniform redistribution of Al along the wire, creating opens in the wire. Research at IBM on a wide variety of alloying elements that could improve the electromigration resistance led to the addition of Cu into aluminum.
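
The sensitivity to current density is usually captured by Black’s equation, MTTF = A * J^(-n) * exp(Ea / (k*T)). The sketch below is illustrative; A, n, and Ea are typical textbook-style values, not measured parameters for Al or Al-Cu metallization.

    import math

    K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

    # Black's equation for electromigration lifetime. Parameter values are
    # assumptions for illustration, not measured data for any real metal stack.
    def black_mttf(j_a_per_cm2, temp_k, a=1e10, n=2.0, ea_ev=0.7):
        return a * j_a_per_cm2 ** (-n) * math.exp(ea_ev / (K_B_EV * temp_k))

    # With n = 2, doubling the current density cuts lifetime by ~4x:
    print(black_mttf(1e5, 373) / black_mttf(2e5, 373))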

When bipolar transistors were replaced by MOS transistors, contact spiking and leakage became a problem and were solved by using Al-Si alloys. In the early 1980s, 3µm MOS technology was affected by catastrophic field failures due to voids in narrow Al interconnects caused by mechanical stress. Improvements in the deposition techniques from evaporation to sputtering and from batch systems to single wafer systems dramatically reduced this reliability problem.

With every successive technology node, the increase in the number of transistors required a concomitant increase in the total length of metal needed for interconnection. Feature scaling of metal patterns, enabled by new lithography tools and plasma etch, allowed a single level of metal to continue to meet the requirements. NMOS followed by CMOS technology allowed new logic products such as microprocessors and gate arrays to be introduced. Unlike memories, logic products required more connectivity between component logic cells and drove up the requirements for interconnect resources. Dual-layer metallization became necessary for MOS circuits, but products suffered from low yields because of equipment and process limitations. Problems included nonuniform intermetal dielectric deposition and inadequate planarization of the dielectric layers.

Replacement of furnace-based deposition systems with single-wafer dielectric deposition systems in the mid-1980s helped drive the adoption of multilevel metallization in more products. Higher levels of integration with each technology node continued to drive the need for more layers of metal but this was limited to three levels by the existing planarization techniques. The invention of chemical mechanical planarization (CMP) by IBM and its rapid adoption by the industry allowed the levels of metal to increase dramatically to five or more by the mid-1990s when the first 0.25µm circuits were introduced (see “CMP: Still sorting out technology origins” on p. S19). The EDA industry in turn had to introduce new place-and-route tools to handle the new designs. In addition, many metal-related material problems had to be addressed.

New materials were introduced. Step-coverage problems in contacts and vias were solved using CVD tungsten. Reliability issues such as stress voiding and electromigration were handled with multilayer stacks that sandwiched Al between Ti and TiN layers. The increased complexity of the metallization was handled by a new generation of single-wafer, multichamber systems, such as the Endura PVD system from Applied Materials.

In the design realm, the rapid scaling of metallization brought a new set of challenges for tools. The resistance and capacitance of the metal interconnects now had to be accurately accounted for in the performance of the circuit. In addition, voltage drops in the power supply grid also had to be considered together with signal integrity issues associated with coupling between adjoining metal lines. Design tool improvements were followed by the next generation of process innovations wherein Al was replaced by Cu, and the deposition and etch by dual damascene. Circuits are now approaching 32nm and have 10 layers of metal using these same techniques.
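
A back-of-the-envelope calculation shows why wire resistance and capacitance forced their way into the tools: for a fixed-length wire, the lumped delay ~ R*C grows rapidly as the cross-section shrinks. The geometry and dielectric values below are rough assumptions chosen only for illustration.

    # Crude lumped-RC wire delay: R = rho*L/(W*t); C modeled as a parallel
    # plate of area L*W over a dielectric gap of thickness t. Values are rough.
    RHO_CU = 1.7e-8   # ohm*m, bulk copper resistivity
    EPS_0 = 8.85e-12  # F/m, vacuum permittivity
    K_ILD = 3.0       # assumed interlevel dielectric constant

    def wire_rc_seconds(length_m, width_m, thickness_m):
        r = RHO_CU * length_m / (width_m * thickness_m)
        c = K_ILD * EPS_0 * length_m * width_m / thickness_m
        return r * c

    # The same 1mm wire at two generations of cross-section:
    print(wire_rc_seconds(1e-3, 500e-9, 500e-9))  # ~2 ps
    print(wire_rc_seconds(1e-3, 100e-9, 200e-9))  # ~11 ps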

In the next 50 years, we should expect that the trend toward increased integration and performance will continue-albeit at a slower pace. To get around the integration limits set by the active devices occupying only one layer and the metallization occupying multiple layers, 3D integration will stack individual die and connect them by vertical wiring. Communication between individual blocks in a die is limited by delays in long metal lines and hence optical components will be integrated into the 3D chips. As we go beyond 22nm, the shorter wires such as those in the lower levels of metal become only a few atomic layers wide, and we will need self-assembly instead of pattern and etch. For these layers, lithography and etch will be replaced by direct e-beam writing in conjunction with simultaneous deposition.

Dipankar Pramanik is an SST Editorial Advisory Board member. He has 25 years of experience in the industry. Contact him at [email protected].

A new paradigm in semiconductor technology

The technological revolution affecting all spheres of our lives over the past half-century is largely due to the advent and pervasive adoption of semiconductor technology. The success of semiconductor technology has been due to our choice of semiconductor material, typically silicon, and the ability to form high-quality insulating layers from silicon. The manufacturing methods we have used to exploit this ability are fundamental to the rapid progress our industry has witnessed over the past few decades.

Silicon dioxide, the most important dielectric (called “nature’s gift to mankind” because of the high-quality interface it forms with silicon), has long been grown thermally or deposited from various precursors by thermal means or with plasma assistance. Often an analogous insulator, silicon nitride, which is made by similar techniques, can be used with silicon dioxide for either electrically active or passive functions in both logic and memory devices.

These insulators are used as mission-critical layers such as gate dielectrics that control the high-speed switching of transistors, and as capacitor dielectrics in many types of devices that store information. They are also used to isolate transistors from each other, from various components of the transistor itself, and from metal lines that convey information between the transistors. Besides these functional applications, these materials are extensively applied as masking layers during device fabrication to selectively define diverse components of the transistors and memory devices, taking advantage of their mutually selective properties.

To meet ever-increasing performance demands, these dielectrics have been scaled aggressively, from silicon dioxide gate dielectric films roughly 90 atomic layers thick in 1985 down to films four atomic layers thick today. Over the past few decades, novel manufacturing processes and tools with better uniformity, better electrical properties, and lower costs have been engineered to grow and deposit these films, helping to accelerate increased chip functionality while driving down costs. Indeed, silicon-based dielectric films have been quite versatile and tenacious, and have become ubiquitous and indispensable.

The unrelenting need to scale the electrical thickness of the gate and capacitor dielectrics has ultimately led to the recent introduction of non-silicon-based high-k dielectrics. This change, especially in gate dielectrics, is significant for a couple of reasons. First, metal-oxide-based high-dielectric-constant materials of sufficiently good quality have been introduced for the first time. For context, metals in the transistor channel can cause significant problems by degrading mobility and generating trap sites in the dielectric.

Second, a high-quality gate dielectric layer has been deposited rather than thermally grown, as has been the case for silicon dioxide for over three decades. This requires controlling not only the stoichiometry of the deposited dielectric, but also the uniformity down to the atomic layers. Such stringent requirements have led to the development and introduction of manufacturable atomic layer deposition technologies. It is interesting to note that the high-k dielectric layer still requires an ultrathin silicon dioxide-based interface to retain desirable performance characteristics. However, the introduction of high-k dielectrics for gate and memory dielectrics is the first instance of integrating functional materials with established silicon technologies to enhance performance.
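
The standard yardstick for these films is equivalent oxide thickness: EOT = t_film * (k_SiO2 / k_film), with k_SiO2 = 3.9. A minimal sketch, assuming a hafnium-oxide-like k of about 20 (a typical literature figure used here as an assumption, not a process specification):

    # Equivalent oxide thickness: the SiO2 thickness that would give the same
    # capacitance per unit area as a high-k film. The k value is an assumption.
    K_SIO2 = 3.9

    def eot_nm(physical_thickness_nm, k_film):
        return physical_thickness_nm * K_SIO2 / k_film

    # A 3nm high-k film behaves, capacitively, like ~0.6nm of SiO2 while
    # staying physically thick enough to suppress tunneling leakage:
    print(f"{eot_nm(3.0, 20.0):.2f} nm EOT")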

The future holds tremendous promise for such heterogeneous integration of unique and novel materials with known and established device technologies. Future technology scaling will not so much focus on reducing dimensions alone, but will focus on increasing the functionality derived from adding hitherto incompatible materials. In the near term, CMOS and memory scaling will continue to use such options more aggressively. In the longer term, combinations of emerging nanotechnologies with CMOS and memory are likely.

Opportunities for further system-level functional improvements may be found by integrating combinations of technologies as needed: optical, optoelectronic, spin-based devices, RF and analog devices, dense multilayer nonvolatile memories, non-charge-based memories, micro-electromechanical devices, sensors, and ultra-low-power devices. The clear challenge will be to invent cost-efficient new technologies that allow such heterogeneous integration with atomic-level control but without sacrificing desirable properties. Opportunities are just opening up to fabricate highly integrated circuits with functionally diverse elements, combining today’s discrete devices into seamlessly operating single chips with small form factors. The future is rife with exciting possibilities in the area of hybrid integration of novel functional materials.

Raj Jammy is an SST Editorial Advisory Board member and, during his career, managed IBM programs in high-k gate dielectrics and metal gates. Contact him at [email protected] or ph 512/356-3098.

Revolutionary and evolutionary changes

Since beginning in the industry in 1978 at IBM, where the bipolar transistor was king in the front-end-of-line (FEOL), I have seen many changes take place, most of which were evolutionary, but some that were indeed revolutionary. Examples of the former include the switch to tungsten studs at the contact level to solve line opens associated with steep taper vias and, later, sputtered Al/Cu metal/metal RIE at the line level with tungsten studs as connecting vias on all levels. This allowed significant densification as the vertical sidewalls of tungsten vias took up a minimum of real estate and reliability increased markedly. Following this, power concerns led to a switch to CMOS devices almost overnight.

Major driving factors in the evolution of CMOS were the Roadmap and Moore’s Law, which, through competitive forces, caused IBM and others to shrink or scale by 35% or so, and thus create a node change, every two years. Fortunately, this not only increased the density by a factor of two, but also increased the speed, because gate length and gate oxide shrank at the same rate as the wiring. In the course of these biennial improvements, important process changes were often undertaken that increased device speed and reliability. Examples include titanium silicide to lower RC time constants and contact resistance, and the self-aligned source/drain, which, in combination with the invention of the spacer, permitted formation of the lightly doped drain to combat hot electrons. This approach has since evolved into the source/drain extension (SDE) to reduce Ioff to a tolerable level. Without these evolutionary changes, there would not be any practical devices at current device sizes.
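
The arithmetic behind that cadence is simple: shrinking every linear dimension by a factor s shrinks area by s^2, so s ~ 0.7 doubles the density. A one-function sketch:

    # Density multiplier when all linear dimensions scale by the factor s.
    def density_gain(s):
        return 1.0 / s ** 2

    print(density_gain(0.70))  # ~2.0x per node
    print(density_gain(0.65))  # the "35% or so" shrink quoted above: ~2.4x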

Regarding the front-end-of-line, it is worth adding that without resist trim, which allows gates to be made with 32nm gate lengths in a 65nm node using lithography that struggles at 50nm, we would not have the speeds we have today. Add strain to enhance mobility (in particular, the growth of SiGe pockets in the pFET to locally stress the region laterally under the gate), and one is caught up more or less on the major achievements of the 45-65nm nodes. SiGe selective growth is perhaps an example of bottom-up self-assembly, with the SiGe growing only in the pockets adjacent to the pFET. This was a very surprising change that made it into mainstream devices, as first introduced by Intel. The opposite of trim, shrinkage of holes, can be used in the future to achieve structures with finer dimensions than litho capabilities allow.

In the near term, we will enter the era of significant material changes in the FEOL, specifically the introduction of metal gates and high-k gate oxide. This may well be achieved through quite different approaches. For example, IBM will almost certainly do gate-first, while Intel may use a gate-last approach, removing a sacrificial gate and replacing it with a different, more compatible metal gate in the pFET than in the nFET. These contrasting methods are an indication of the major divergences that will occur in process builds in the near future. Other changes that will likely be implemented are SiC in the nFET and even stressed shallow-trench isolation to keep us on the path of increasing mobility, and hence speed. It appears, however, that stress enhancement techniques will get more and more difficult as scaling proceeds.

The use of double patterning, first perhaps with memory products, is likely to occur soon given the delay of EUV. DRAM deep trench will go vertical and thus save real estate, continuing the path of DRAM density increases. Logic will keep thinning the SOI and perhaps will reach the point of fully depleted devices, in which Ioff can be controlled effectively, before turning to 3D devices such as FinFETs. Nevertheless, the latter will likely come within five years. Planar devices may in the interim use thin Ge and GaAs layer inserts to enhance mobility still further, using thin-film transistor processes combined with epi to produce the layered structures under the gate.

Other important changes coming shortly include the introduction of new memory, perhaps based on phase change, taking over the nonvolatile memory market, with some form of it replacing the hard drive in laptops; this will dramatically improve battery life and boot-up time and will soon be on all computers. Then there are 3D interconnects, with two or even three layers of different types of devices joined by through-silicon vias. This technology will appear in memory chips as well as photonic and MEMS configurations and will enable unprecedented functionality in small packages.

What about beyond 20 years? A likely replacement for our current electrostatic switch is the magnetic device that manipulates spin, which should alleviate the Ioff problem and allow faster switching. BEOL (back-end-of-line) distribution with copper, albeit with more levels, may do some of the job. Photons may be used to some degree, but alternative wires will still be required. It is not yet clear whether carbon nanotubes can serve, as they must first overcome the hurdles of contact resistance and of effective methods of formation.

Ernest Levine is an SST Editorial Advisory Board member and is well known for giving an IC fabrication course. Contact him at elevine@uamail.albany.edu.

From curiosity to major engine of the global economy

In five short decades, semiconductor manufacturing has moved from being the variable work of artisans to one of the major engines of the global economy. Within this relatively compressed period, there have been several major discontinuities in both the design and the materials of semiconductors, each championed and dominated by a different group of players, all in pursuit of reduced cost per logic or memory function. After moving in fairly rapid succession from vacuum tubes to transistors-first realized in germanium, then settling on silicon-it was the advent of bipolar IC technology that kicked off the first major growth period in the semiconductor industry, with Texas Instruments, Motorola, and Fairchild Semiconductor in the ascendant.

Then the successful transition to MOS technology by Intel, Texas Instruments, MOSTEK, and Motorola opened up whole new application horizons with affordable semiconductor-based memories that enabled mass market products such as handheld calculators. In a matter of years, microprocessors came to the fore with Intel and Motorola dominating the market, ushering in the PC era that transformed both the office and the home. That was followed in rapid succession by special purpose microprocessors-the most important being the DSP from Texas Instruments-and the subsequent emergence of customizable ICs.

While the industry was experiencing all these technical advancements, it was also undergoing a major evolution in structure: the total vertical integration that once dominated the semiconductor industry gave way to ever-greater specialization in the pursuit of economies of scale.

The emergence of the customizable IC-concurrent with the rise of the electronic design automation (EDA) industry in the early 1980s and the availability of relatively affordable workstations-accelerated the ‘democratization’ of semiconductor development, supporting a far larger community of designers than was ever thought possible. The fabless/foundry model completed this evolution from a vertically structured industry to a vibrant horizontal market capable of supporting hundreds of specialized companies. Underpinning it all, AT&T’s liberal licensing of the 1947 transistor patents essentially suspended the patent system for electronics for 35 years, forcing everyone to cross-license and stand on each other’s shoulders, thus making incredibly rapid innovation possible.

What the past 50 years taught us is that we always find a way to push existing technology farther than anyone would believe. For example, optical lithography has printed features far smaller than its wavelength-well below what was thought possible-thanks to resolution enhancement technology (RET). And despite all the talk about e-beam direct write, EUV, etc., 193nm immersion will take the industry to at least 11nm and maybe further. In fact, lithography may not be the reason for the eventual change in technology; it most likely will be power.

Certainly the 50-year horizon means a new switch-whether bionic, nano, optical, or maybe several-as the industry continues its inexorable drive toward lower cost per function. It also means new memory structures based upon new materials. For now, though, we are nowhere near the atomic/electron limits; in fact, we are at least 1000× away from them. Without question, charge leakage makes resistive and phase-change approaches better than charge storage, but we will push charge storage beyond all reasonable limits. Storage requirements for video alone mandate at least five to six orders of magnitude improvement in cost per bit.

The really fascinating development of the next 50 years will be the rapid expansion of the breadth and depth of technology into every aspect of our lives, fueling continued growth in the market. In the late 1960s, only about 5000 designers (mostly employed by large semiconductor and systems companies) could create new IC functions. ASIC capability and EDA tools broadened this to roughly 50,000 designers in the 1980s. Extending this log-linear growth in designers to the current decade, we have approximately 500,000 “designers” capable of customizing ICs using field-programmable logic.

So what happens in the near-term future-say, 2015? If trends continue, we will find ways to simplify the customization of ICs to the level of programming a digital video recorder. That is what will be required if the extrapolated five million “designers” are to enter the fray by 2020. That in turn will require innovative compilers, ways to utilize multi-core processors on a single chip, and the reusability of IP.
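As a rough check on that extrapolation-a minimal sketch in which the head counts are the figures quoted above, while the anchor years are approximations-a simple log-linear fit lands in the same order of magnitude:

```python
# A quick sketch of the log-linear growth in IC "designers" described above.
# The counts come from the text; the anchor years are approximations.
import math

observations = {1968: 5_000, 1985: 50_000, 2005: 500_000}

# Least-squares fit of log10(count) against year.
years, logs = zip(*((y, math.log10(c)) for y, c in observations.items()))
n = len(years)
ybar, lbar = sum(years) / n, sum(logs) / n
num = sum((y - ybar) * (l - lbar) for y, l in zip(years, logs))
den = sum((y - ybar) ** 2 for y in years)
slope = num / den
intercept = lbar - slope * ybar

for target in (2015, 2020):
    projected = 10 ** (intercept + slope * target)
    print(f"{target}: ~{projected:,.0f} designers")
```

With these assumptions the fit projects a few million field-programmable-logic designers by 2020-the same order of magnitude as the five million extrapolated above.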

These seem like plenty of exciting challenges to me. So here’s to the next 50 years!

Walden C. Rhines, chairman and CEO of Mentor Graphics, can be contacted via Sonia Harrison at e-mail [email protected].

Worldwide and pervasive expansion of semiconductors

The electronics and semiconductor industries have good reason to be proud of their accomplishments over the past 50 years because of the many ways they have enriched our quality of life. The most important accomplishments for the next 50 years will also be measured by the direct impact these industries have on the everyday lives of people around the world.

The impact of electronics has been so pervasive that we often take it for granted. This is true, at least in the Western world, where we are inundated with electronic products enabled by the semiconductor industry-from the time we wake up to our radio/CD alarm clocks, talking bathroom scales, electric shavers, and toothbrushes, to the end of a long, busy day, when we slump into beds with electric blankets and sleep numbers. Examples include microwave ovens, talking refrigerators, electronic ignitions, electronically controlled fuel injection and navigation systems in our cars, mobile telephones and computers, the Internet, teleconferencing systems, cable television with digital video recording, and satellite digital media players that can access radio and television content from most places on the globe, to name a few. We may wonder whether all of these items really enrich our lives, but the doubt disappears whenever we are deprived of any of these luxuries.

If we were to try to compile a “Top 4” list of the advances in the past 50 years that have most affected our daily lives, it would probably include:

4. Solid-state memory. This includes DRAMs, flash, EPROMs, and EEPROMs. The availability of cheap memory, its capacity expanding along Moore’s Law, has driven advances in digital imaging, personal music players, PCs, and almost every other consumer electronic device available today.

3. The personal computer. Without the PC, the advancements in the Internet would have been unimaginable. The PC also brought the power of desktop publishing, data analysis, and basic office communications that are central to the operation of most businesses worldwide.

2. The cellular/mobile phone. The cell phone has gained acceptance virtually everywhere. Its smaller infrastructure requirements have allowed explosive worldwide demand to be met, even in the poorest of countries.

1. The Internet. Probably the single most important advancement of the last 100 years, the Internet has revolutionized business, communication, and connectedness globally, and has rendered national borders meaningless in terms of information flow. Moreover, the revolution is not yet over: underdeveloped parts of the world are only beginning to adopt the technology.

Looking forward to the next 50 years, the opportunities for the semiconductor industry are even more dramatic. As computing power increases and the technology becomes adept at manipulating atomic-level structures, it is very likely that some of the major issues facing our world will become addressable.

The biggest opportunities for our industry include:

3. Energy alternatives. There are two main opportunities in the category of reducing greenhouse gases and increasing the availability of energy. One is to make renewable energy sources more practical: besides the big push today toward solar energy, there are also opportunities to use electronics technology to make the conversion of wind and other alternative sources to electricity more practical. The second is to advance storage technology to hold more charge per unit volume with minimal leakage; such advances will be the big enabler for electric cars.

2. Artificial intelligence. Despite explosive growth in computing over the past 50 years, we are not much closer to unlocking the mystery of cognition and how to create truly intelligent systems. Advances in understanding how intelligence works may be the key to transforming simple speech recognition into genuine language understanding. The impact would be huge: real language interpretation, machines that converse with us in our own language, and so on. Intelligent systems could also open up many more applications of computing power.

1. Biotechnology. It is not hard to imagine how advances in the silicon industry, which aim to pattern structures at ever-smaller dimensions, could be applied to manipulate and possibly manufacture biological macromolecules such as DNA/RNA and proteins. Over the past 50 years, biotechnology advanced from the initial Watson-Crick model of DNA and a basic understanding of the cell’s protein-making machinery to unlocking the human genome and finding linkages between genetic defects and common diseases. Going forward, the ability to manipulate the machinery in our cells to fight diseases more effectively and to repair worn-out systems is a huge opportunity. It may not happen in five or even 10 years, but given 50 years, it becomes highly feasible.

Jack Uppal has 27 years of experience in the industry. Contact him at [email protected].

Here today, gone tomorrow?

I’ve been in the semiconductor packaging industry for only about 40% of its history, but even in that span of time I have witnessed profound changes. The first change that comes to mind is the people. Even in the 1980s, there were many people without technical training working in the “Package Engineering” group. A reasonably alert and organized person could handle most of the coordination and “check-list” functions required for selecting and implementing a package for a particular semiconductor product. Except at the very high end, there was little thought of package enhancements, custom features, electrical parameter optimization, system-level constraints, or any other factors that must be taken into account routinely today.

I was fortunate to be picked to start up the package modeling effort at National Semiconductor, and I discovered that the previous owner of such data didn’t really know what it all meant. In contrast, in 2007, I coordinated a large proposal for a packaging technology program, and when putting together the biographies of the people that would execute the program, I realized that everyone involved-everyone-had a PhD. So, in 20 years, the packaging industry has gone from employing people without formal engineering training to hiring those with PhDs.

So, where do you extrapolate that trend? What is ahead for packaging? I think it’s safe to say that packaging as we know it won’t exist by some point between now and 2057. Even today, the term “package” has become misleading. QFNs, chip-scale packages, and wafer-level packaging in particular merely surround a chip with a thin enclosure, rather than embedding it in an oversized container. In fact, I think of it as more like an envelope than a package, although it’s hard to picture the terminology fully changing from “packaging and test” to “enveloping and test.” (Perhaps the term “packaging” will hang on longer than the reality, like “dialing” a phone.) Eventually, packaging will truly be just the last step in the wafer process. The last major wafer processes now, forming the interconnect, do not constitute a separate industry, and there’s no reason to believe that packaging won’t be similarly integrated, both functionally and conceptually.

Packaging engineers should not despair about their future, though. The multidisciplinary nature of the field should ensure that astute packaging people can make room for themselves in the future semiconductor industry. Most pundits envision electronics becoming ubiquitous, not just in your car, your house, and your ear, but also in your clothing and inside your body. We should be prepared to picture any surface as an electronics platform. There are already prototypes of displays embedded in clothing-advertising opportunities might be driving the next electronics revolution-and there are interesting announcements all the time about further integration of electronics with living tissue.

All of these trends point to profound challenges in materials, mechanical engineering, power distribution, co-design, and thermal management, all areas in which packaging engineers are leaders. I see a role for the packaging industry to leverage its broad base and ingenuity, update itself, and drive the upcoming revolution in electronics.

Jeffrey Demmin is an SST Editorial Advisory Board member. He has held a succession of engineering posts at National Semiconductor, nCHIP, Seagate, and Textron Systems. Contact him at [email protected].