Leading industry experts provide their perspectives on what to expect in 2015. 3D devices and 3D integration, rising process complexity and “big data” are among the hot topics.
Entering the 3D era
Steve Ghanayem, vice president, general manager, Transistor and Interconnect Group, Applied Materials
This year, the semiconductor industry celebrates the 50th anniversary of Moore’s Law. We are at the onset of the 3D era. We expect to see broad adoption of 3D FinFETs in logic and foundry. Investments in 3D NAND manufacturing are expanding as this technology takes hold. This historic 3D transformation impacting both logic and memory devices underscores the aggressive pace of technology innovation in the age of mobility. The benefits of going 3D — lower power consumption, increased processing performance, denser storage capacity and smaller form factors — are essential for the industry to enable new mobility, connectivity and Internet of Things applications.
The semiconductor equipment industry plays a major role in enabling this 3D transformation through new materials, capabilities and processes. Fabricating leading-edge 3D FinFET and 3D NAND devices adds to a chip-manufacturing complexity that has soared with each node transition. The 3D structure poses unique challenges for deposition, etch, planarization, materials modification and selective processes to create a yielding device, requiring significant innovations in critical dimension control, structural integrity and interface preparation. As chips get smaller and more complex, variations accumulate while process tolerances shrink, eroding performance and yield. Chipmakers need cost-effective solutions that rapidly ramp device yield to maintain the cadence of Moore's Law. Given these challenges, 2015 will be the year when precision materials engineering technologies are put to the test to demonstrate high-volume manufacturing capabilities for 3D devices.
Achieving excellent device performance and yield for 3D devices demands equipment engineering expertise leveraging decades of knowledge to deliver the optimal system architecture with wide process window. Process technology innovation and new materials with atomic-scale precision are vital for transistor, interconnect and patterning applications. For instance, transistor fabrication requires precise control of fin width, limiting variation from etching to lithography. Contact formation requires precision metal film deposition and atomic-level interface control, critical to lowering contact resistance. In interconnect, new materials such as cobalt are needed to improve gap fill and reliability of narrow lines as density increases with each technology node. Looking forward, these precision materials engineering technologies will be the foundation for continued materials-enabled scaling for many years to come.
Increasing process complexity and opportunities for innovation
Brian Trafas, Chief Marketing Officer, KLA-Tencor Corporation
The 2014 calendar year started with promise and optimism for the semiconductor industry, and it concluded with similar sentiments. While concerns about financial risk and industry consolidation surface at times and threaten to overshadow the industry, there is much to be positive about as we arrive in the new year. From increases in equipment spending and materials-market revenue to record projections for silicon wafer shipments, the 2015 forecasts all point in the right direction. Industry players are also doing their part to address new challenges, creating strategies to overcome the complexities associated with innovative techniques such as multipatterning and 3D architectures.
The semiconductor industry continues to explore new technologies, including 3DIC, TSV and FinFETs, which carry challenges that also represent opportunities. First, for memory as well as foundry logic, the need for multipatterning to extend lithography is a key focus. We're seeing some of the value of a traditional lithography tool shift into non-litho processing steps. As such, customers need to monitor both litho and non-litho sources of error and critical defects to yield successfully at next-generation nodes. With patterning process windows shrinking, it is essential to address all sources of error and to feed corrections forward and backward correctly.
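To make the feed-forward and feed-back idea concrete, here is a minimal, hypothetical Python sketch; the target CD, trim-rate and dose-sensitivity numbers are illustrative assumptions, not values from any KLA-Tencor product or specific process.

    # Hypothetical illustration of feed-forward/feed-back CD control.
    TARGET_CD_NM = 20.0        # final CD target after etch (assumed)
    ETCH_TRIM_NM_PER_S = 0.25  # assumed CD reduction per second of trim etch
    DOSE_SENS_NM_PER_MJ = -0.8 # assumed CD change per mJ/cm^2 of exposure dose

    def feed_forward_trim(post_litho_cd_nm, base_trim_s=8.0):
        """Adjust this wafer's etch trim time so it still lands on the final CD target."""
        litho_target = TARGET_CD_NM + base_trim_s * ETCH_TRIM_NM_PER_S
        cd_error = post_litho_cd_nm - litho_target
        return base_trim_s + cd_error / ETCH_TRIM_NM_PER_S

    def feed_back_dose(post_litho_cd_nm, litho_target_nm, current_dose_mj, gain=0.5):
        """Nudge the next lot's exposure dose back toward the post-litho CD target."""
        cd_error = post_litho_cd_nm - litho_target_nm
        return current_dose_mj - gain * cd_error / DOSE_SENS_NM_PER_MJ

    print(feed_forward_trim(22.4))           # wafer printed wide -> longer trim
    print(feed_back_dose(22.4, 22.0, 30.0))  # next lot gets a slightly higher dose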
The transition from 2D to 3D in memory and logic is another focus area. 3D leads to tighter process margins because of the added steps and complexity. Addressing specific yield issues associated with 3D is a great opportunity for companies that can provide value in addressing the challenges customers are facing with these unique architectures.
The wearable, intelligent mobile and IoT markets are continuing to grow rapidly and bring new opportunities. We expect the IoT to drive higher levels of semiconductor content and contribute to future growth in the industry. Demand for these types of devices will add value across the entire chain, including not only semiconductor devices but also software and services. The semiconductor content in these devices can provide growth opportunities for microcontrollers and embedded processors, as well as sensing devices.
Critical to our industry’s success is tight collaboration among peers and with customers. With such complexity to the market and IC technology, it is very important to work together to understand challenges and identify where there are opportunities to provide value to customers, ultimately helping them to make the right investments and meet their ramps.
Controlling manufacturing variability key to success at 10nm
Richard Gottscho, Ph.D., Executive Vice President, Global Products, Lam Research Corporation
This year, the semiconductor industry should see the emergence of chip-making at the 10nm technology node. When building devices with geometries this small, controlling manufacturing process variability is essential and most challenging since variation tolerance scales with device dimensions.
Controlling variability has always been important for improving yield and device performance. With every advance in technology and change in design rule, tighter process controls are needed to achieve these benefits. At the 22/20nm technology node, for instance, variation tolerance for CDs (critical dimensions) can be as small as one nanometer, or about 14 atomic layers; for the 10nm node, it can be less than 0.5nm, or just 3 – 4 atomic layers. Innovations that drive continuous scaling to sub-20nm nodes, such as 3D FinFET devices and double/quadruple patterning schemes, add to the challenge of reducing variability. For example, multiple patterning processes require more stringent control of each step because additional process steps are needed to create the initial mask: more steps mean more variability overall. Multiple patterning puts greater constraints not only on lithography, but also on deposition and etching.
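As a back-of-the-envelope illustration of why more steps mean more variability, the short Python sketch below combines independent per-step CD variations by root-sum-square; the per-step numbers are purely illustrative assumptions.

    # If each step's CD contribution varies independently, the combined 3-sigma
    # grows as the root-sum-square of the per-step values.
    import math

    def combined_3sigma(per_step_3sigma_nm):
        """Root-sum-square combination of independent per-step 3-sigma variations."""
        return math.sqrt(sum(s ** 2 for s in per_step_3sigma_nm))

    single_exposure = combined_3sigma([0.8])                 # one litho step
    multi_patterning = combined_3sigma([0.8, 0.5, 0.5, 0.4]) # litho + spacer dep/etch steps

    print(f"single exposure: {single_exposure:.2f} nm 3-sigma")
    print(f"multi-step flow: {multi_patterning:.2f} nm 3-sigma")
    # Against a sub-0.5 nm tolerance at 10nm, each added step must be controlled far more tightly.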
Three types of process variation must be addressed: within each die or integrated circuit at an atomic level, from die to die (across the wafer), and from wafer to wafer (within a lot, lot to lot, chamber to chamber, and fab to fab). At the device level, controlling CD variation to within a few atoms will increasingly require the application of technologies such as atomic layer deposition (ALD) and atomic layer etching (ALE). Historically, some of these processes were deemed too slow for commercial production. Fortunately, we now have cost-effective solutions, and they are finding their way into volume manufacturing.
To complement these capabilities, advanced process control (APC) will be incorporated into systems to tune chemical and electrical gradients across the wafer, further reducing die-to-die variation. In addition, chamber matching has never been more important. Big data analytics and subsystem diagnostics are being developed and deployed to ensure that every system in a fab produces wafers with the same process results to atomic precision.
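As a simple, hypothetical illustration of the kind of chamber-matching check such analytics perform, the sketch below flags a chamber whose mean CD result drifts beyond an assumed matching budget; the chamber names, values and budget are invented for illustration.

    # Toy chamber-matching check: compare each chamber's mean CD against the fleet.
    from statistics import mean

    MATCH_BUDGET_NM = 0.2  # hypothetical chamber-to-chamber matching tolerance

    chamber_cd_nm = {
        "PM1": [20.02, 19.98, 20.05, 20.01],
        "PM2": [20.03, 20.00, 19.99, 20.04],
        "PM3": [20.41, 20.38, 20.45, 20.40],  # drifting chamber
    }

    fleet_mean = mean(cd for cds in chamber_cd_nm.values() for cd in cds)
    for chamber, cds in chamber_cd_nm.items():
        offset = mean(cds) - fleet_mean
        status = "MATCHED" if abs(offset) <= MATCH_BUDGET_NM else "NEEDS TUNING"
        print(f"{chamber}: offset {offset:+.3f} nm -> {status}")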
Looking ahead, we expect these new capabilities for advanced variability control to move into production environments sometime this year, enabling 10nm-node device fabrication.
2015: The year 3D-IC integration finally comes of age
Paul Lindner, Executive Technology Director, EV Group
2015 will mark an important turning point in the course of 3D-IC technology adoption, as the semiconductor industry moves 3D-IC fully out of the development and prototyping stages and onto the production floor. In several applications, this transition is already taking place. To date, at least a dozen components in a typical smart phone employ 3D-IC manufacturing technologies. While the application processor and memory in these smart devices continue to be stacked at the package level (PoP), many other device components, including image sensors, MEMS, RF front end and filter devices, are now realizing the promise of 3D-IC, namely reduced form factor, increased performance and, most importantly, reduced manufacturing cost.
The increasing adoption of wearable mobile consumer products will also accelerate the need for higher-density integration and reduced form factor, particularly with respect to MEMS devices. More functionality will be integrated both within the same device and within one package via 3D stacking. Nine-axis inertial measurement units (IMUs, which combine three accelerometer, three gyroscope and three magnetometer axes) will see reductions in size, cost and power consumption, along with easier integration.
On the other side of the data stream, at data centers, expect to see new developments around 3D-IC technology coming to market in 2015 as well. Compound semiconductors integrated with photonics and CMOS will trigger the replacement of copper wiring with optical fibers to drive down power consumption and electricity costs, thanks to 3D stacking technologies. The recent introduction of stacked DRAM alongside high-performance microprocessors, such as Intel's Knights Landing processor, already demonstrates how 3D-IC technology is finally delivering on its promises across many different applications.
Across these various applications that are integrating stacked 3D-IC architectures, wafer bonding will play a key role. This is true for 3D-ICs integrating through-silicon vias (TSVs), where temporary bonding in the manufacturing flow or permanent bonding at the wafer level is essential. It is the case for reducing power consumption in wearable products integrating MEMS devices, where encapsulation at higher vacuum levels will enable low-power operation of gyroscopes. Finally, wafer-level hybrid fusion bonding, a technology that permanently connects wafers both mechanically and electrically in a single process step and supports the development of thinner devices by eliminating adhesive thickness and the need for bumps and pillars, is one of the promising new processes that we expect to see utilized in device manufacturing starting in 2015.
2015: Curvilinear Shapes Are Coming
For the semiconductor industry, 2015 will be the start of one of the most interesting periods in the history of Moore’s Law. For the first time in two decades, the fundamental machine architecture of the mask writer is going to change over the next few years—from Variable Shaped Beam (VSB) to multi-beam. Multi-beam mask writing is likely the final frontier—the technology that will take us to the end of the Moore’s Law era. The write times associated with multi-beam writers are constant regardless of the complexity of the mask patterns, and this changes everything. It will open up a new world of opportunities for complex mask making that make trade-offs between design rules, mask/wafer yields and mask write-times a thing of the past. The upstream effects of this may yet be underappreciated.
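A toy write-time model makes the contrast concrete; the shot rate and fixed multi-beam time below are illustrative assumptions, not specifications of any actual writer.

    # VSB write time scales roughly with shot count (which explodes for
    # curvilinear shapes); multi-beam time is essentially constant regardless
    # of pattern complexity. All constants are hypothetical.

    def vsb_write_hours(shot_count, shots_per_hour=2.5e9):
        """VSB: time grows with the number of rectangular shots."""
        return shot_count / shots_per_hour

    def multibeam_write_hours(fixed_hours=10.0):
        """Multi-beam: roughly constant, independent of shape complexity."""
        return fixed_hours

    for shots in (1e10, 5e10, 2e11):  # simple Manhattan -> heavily curvilinear/ILT mask
        print(f"{shots:.0e} shots: VSB {vsb_write_hours(shots):5.1f} h, "
              f"multi-beam {multibeam_write_hours():5.1f} h")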
While high-volume production of multi-beam mask writing machines may not arrive in time for the 10nm node, there is little doubt in the industry that it will arrive by the 7nm node. Since transitions of this magnitude take several years to permeate the ecosystem, 2015 is the right time to start preparing for the impact of this change. Multi-beam mask writing enables the creation of very complex mask shapes (even ideal curvilinear shapes). When used in conjunction with optical proximity correction (OPC), inverse lithography technology (ILT) and pixelated masks, this enables more precise wafer writing with improved process margin. Improving process margin on both the mask and the wafer will allow design rules to be tighter, which will re-activate the transistor-density benefit of Moore's Law.
The prospect of multi-beam mask writing makes it clear that OPC needs to yield better wafer quality by taking advantage of complex mask shapes. This clear direction for the future, combined with the need for more process margin and overlay accuracy at the 10nm node, will require complex mask shapes already at 10nm. Technologies such as model-based mask data preparation (MB-MDP) will take center stage in 2015 as a bridge to 10nm using VSB mask writing.
Whether for VSB mask writing or for multi-beam mask writing, the shapes we need to write on masks are increasingly complex, increasingly curvilinear, and smaller in minimum width and space. The overwhelming trend in mask data preparation is the shift from deterministic, rule-based, geometric, context-independent, shape-modulated, rectangular processing to statistical, simulation-based, context-dependent, dose- and shape-modulated, any-shape processing. We will all be witnesses to the start of this fundamental change as 2015 unfolds. It will be a very exciting time indeed.
Data integration and advanced packaging driving growth in 2015
Mike Plisinski, Chief Operating Officer, Rudolph Technologies, Inc.
We see two important trends that we expect to have a major impact in 2015. The first is continuing investment in developing and implementing 3D integration and advanced packaging processes, driven not only by the demand for more power and functionality in smaller volumes, but also by the dramatic escalation in the number and density of I/O lines per die. This includes not only through-silicon vias, but also copper pillar bumps, fan-out packaging, and hyper-efficient panel-based packaging processes that use dedicated lithography systems on rectangular substrates. As the back end adopts and adapts processes from the front end, the lines that have traditionally separated these areas are blurring. Advanced packaging processes require significantly more inspection and control than conventional packaging, and this trend is still only in its early stages.
The other trend has a broader impact on the market as a whole. As consumer electronics becomes a more predominant driver of our industry, manufacturers are under increasing pressure to ramp new products faster and at higher volumes than ever before. Winning or losing an order from a mega cell phone manufacturer can make or break a year, and those orders are being won on technology and quality, not only on price as in the past. This is forcing manufacturers to look for more comprehensive solutions to their process challenges. Instead of buying a tool that meets the criteria of their established infrastructure, then relying on IT to connect it, interpret the data, and build the charts and reports that let process engineers use the tool, manufacturers are now pushing much of this onto their vendors, saying, "We want you to provide a working tool that's going to meet these specs right away and provide us the information we need to adjust and control our process going forward." They want information, not just data.
Rudolph has made, and will continue to make, major investments in the development of automated analytics for process data. Now more than ever, when customers buy a system from us, whatever its application (lithography, metrology, inspection or something new), they also want to correlate the data it generates with data from other tools across the process in order to gain more information about process adjustments. We expect these same customer demands to drive a new wave of collaboration among vendors, and we welcome the opportunity to work together to provide more comprehensive solutions for the benefit of our mutual customers.
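As a minimal sketch of what "information, not just data" can look like in practice, the hypothetical Python snippet below joins per-die results from two tools and checks how an upstream measurement correlates with a downstream outcome; the column names and values are invented, not any tool's actual output.

    # Join per-die overlay measurements with final probe results and correlate.
    import pandas as pd

    overlay = pd.DataFrame({
        "die_id": [1, 2, 3, 4, 5, 6],
        "overlay_err_nm": [2.1, 3.8, 1.9, 5.2, 2.4, 6.1],
    })
    probe = pd.DataFrame({
        "die_id": [1, 2, 3, 4, 5, 6],
        "pass": [1, 1, 1, 0, 1, 0],
    })

    merged = overlay.merge(probe, on="die_id")
    print(merged["overlay_err_nm"].corr(merged["pass"]))
    # A strong negative correlation points the process engineer at overlay,
    # rather than leaving two disconnected data sets.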
Process Data – From Famine to Feast
Jack Hager, Product Marketing Manager, FEI
As shrinking device sizes have forced manufacturers to move from SEM to TEM for analysis and measurement of critical features, process and integration engineers have often found themselves making critical decisions on meagre rations of process data. Recent advances in automated TEM sample preparation, using FIBs to prepare high-quality, ultra-thin, site-specific samples, have opened the tap on the flow of data. Engineers can now make statistically sound decisions in an environment of abundant data. The availability of fast, high-quality TEM data has whetted their appetite for even more, and the resulting demand is drawing sample preparation systems, and in some cases TEMs, out of remote laboratories and onto the fab floor or into "near-line" locations. With the high degree of automation of both the sample preparation and the TEM, the process engineers who ultimately consume the data can now own and operate the systems that generate it, giving them control over the amount of data created.
The proliferation of exotic materials and new 3D architectures at the most advanced nodes has dramatically increased the need for fast, accurate process data. The days when performance improvements required no more than a relatively simple "shrink" of basically 2D designs using well-understood processes are long gone. Complex new processes require additional monitoring to aid process control and failure-analysis troubleshooting. Defects, both electrical and physical, are not only more numerous, but typically smaller and more varied. They are often buried below the exposed surface, which limits the effectiveness of traditional inline defect-monitoring equipment and renews the challenge of diagnosing their root causes. TEM analysis now plays a more prominent role, providing defect insights that enable actionable process changes.
While process technologies have changed radically, market fundamentals have not. First to market still commands premium prices and builds market share. And time to market is determined largely by the speed with which new manufacturing processes can be developed and ramped to high yields at high volumes. It is in these critical phases of development and ramp that the speed and accuracy of automated sample preparation and TEM analysis is proving most valuable. The methodology has already been adopted by leading manufacturers across the industry – logic and memory, IDM and foundry. We expect the adoption to continue, and with it, the migration of sample preparation and advanced measurement and analytical systems into the fab.
Diversification of processes, materials will drive integration and customization in sub-fab
Kate Wilson, Global Applications Director, Edwards
We expect the proliferation of new processes, materials and architectures at the most advanced nodes to drive significant changes in the sub-fab where we live. In particular, we expect to see a continuing move toward the integration of vacuum pumping and abatement functions, with custom tuning to optimize performance becoming a requirement for the increasingly diverse array of applications. There is also a growing requirement for additional features around the core units, such as thermal management, heated N2 injection, and pre- and post-pump precursor treatment, all of which need to be managed.
Integration offers clear advantages, not only in cost savings but also in safety, speed of installation, smaller footprint, consistent implementation of correct components, optimized set-ups and controlled ownership of the process effluents until they are abated reliably to safe levels. The benefits are not always immediately apparent. Just as effective integration is much more than simply adding a pump to an abatement system, the initial cost of an integrated system is more than the cost of the individual components. The cost benefits in a properly integrated system accrue primarily from increased efficiencies and reliability over the life of the system, and the magnitude of the benefit depends on the complexity of the process. In harsh applications, including deposition processes such as CVD, Epi and ALD, integrated systems provide significant improvements in uptime, service intervals and product lifetimes as well as significant safety benefits.
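One way to see how a higher initial price can still yield a lower total cost is a simple cost-of-ownership comparison; all of the figures in the sketch below are hypothetical placeholders, not Edwards pricing or measured reliability data.

    # Simplified lifetime cost comparison: separate pump + abatement vs. an
    # integrated system with fewer service events and less downtime.
    def lifetime_cost(purchase, service_events_per_year, cost_per_service,
                      downtime_hours_per_year, cost_per_downtime_hour, years=5):
        recurring = years * (service_events_per_year * cost_per_service
                             + downtime_hours_per_year * cost_per_downtime_hour)
        return purchase + recurring

    separate   = lifetime_cost(purchase=180_000, service_events_per_year=4,
                               cost_per_service=8_000, downtime_hours_per_year=40,
                               cost_per_downtime_hour=1_500)
    integrated = lifetime_cost(purchase=220_000, service_events_per_year=2,
                               cost_per_service=8_000, downtime_hours_per_year=15,
                               cost_per_downtime_hour=1_500)
    print(f"separate components: ${separate:,}  integrated: ${integrated:,}")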
The trend toward increasing process customization impacts the move toward integration through its requirement that the integrator have detailed knowledge of the process and its by-products. Each manufacturer may use a slightly different recipe and a small change in materials or concentrations can have a large effect on pumping and abatement performance. This variability must be addressed not only in the design of the integrated system but also in tuning its operation during initial commissioning and throughout its lifetime to achieve optimal performance. Successful realization of the benefits of integration will rely heavily on continuing support based on broad application knowledge and experience.
Giga-scale challenges will dominate 2015
Dr. Zhihong Liu, Executive Chairman, ProPlus Design Solutions, Inc.
It wasn't all that long ago that nano-scale was the term the semiconductor industry used to describe small transistor sizes as a mark of technological advancement. Today, with Moore's Law slowing down at sub-28nm, the term more often heard is giga-scale, reflecting a leap in complexity challenges caused in large measure by the massive amounts of big data now part of all chip design.
Nano-scale technological advancement has enabled giga-sized applications across more varieties of technology platforms, including the most popular mobile, IoT and wearable devices. EDA tools must respond to this trend. On one side, accurately modeling nano-scale devices, including complex physical effects due to small geometries and complicated device structures, has increased in both importance and difficulty. Designers now demand more from foundries and have higher standards for PDK and model accuracy. They need a deep understanding of the process platform in order to make their chip or IP competitive.
On the other side, giga-scale designs require accurate tools to handle increasing design size. The small supply voltages associated with technology advancement and low-power applications, and the impact of various process-variation effects, have reduced available design margins. Furthermore, large circuit sizes make designs sensitive to small leakage currents and narrow noise margins. Accuracy will soon become the bottleneck for giga-scale designs.
However, traditional design tools for big designs, such as FastSPICE for simulation and verification, mostly trade off accuracy for capacity and performance. One particular example is memory design, e.g., characterization of large memory instances or full-chip timing and power verification. Because embedded memory may occupy more than 50 percent of the chip die area, it has a significant impact on chip performance and power. For advanced designs, power and timing characterization and verification require much higher accuracy than FastSPICE can offer: errors of 5 percent or less compared to golden SPICE.
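That accuracy requirement amounts to a simple check against the golden SPICE reference; in the Python sketch below, the metric names and numbers are illustrative, and only the 5 percent budget comes from the text above.

    # Compare a fast simulator's results against golden SPICE and flag
    # anything outside a 5 percent error budget. Values are made up.
    GOLDEN = {"read_delay_ps": 212.0, "dyn_power_uW": 48.3, "leakage_nA": 910.0}
    FAST   = {"read_delay_ps": 229.0, "dyn_power_uW": 50.1, "leakage_nA": 1045.0}

    ERROR_BUDGET = 0.05  # 5 percent vs. golden SPICE

    for metric, golden_value in GOLDEN.items():
        rel_err = abs(FAST[metric] - golden_value) / golden_value
        status = "OK" if rel_err <= ERROR_BUDGET else "EXCEEDS BUDGET"
        print(f"{metric:>14}: {rel_err:6.1%} {status}")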
To meet the giga-scale challenges outlined above, the next-generation circuit simulator must offer the high accuracy of a traditional SPICE simulator, and have similar capacity and performance advantages of a FastSPICE simulator. New entrants into the giga-scale SPICE simulation market readily handle the latest process technologies, such as 16/14nm FinFET, which adds further challenges to capacity and accuracy.
A single giga-scale SPICE simulator can cover small- and large-block simulation, characterization, and full-chip verification with a pure SPICE engine that guarantees accuracy and eliminates inconsistencies in the traditional design flow. It can be used as the golden reference for FastSPICE applications, or directly replace FastSPICE for memory designs.
The giga-scale era in chip design is here and giga-scale SPICE simulators are commercially available to meet the need.