By CHOWDARY YANAMADALA, Senior Vice President of Business Development, ChaoLogix, Gainesville, FL 

Data is ubiquitous today. It is generated, exchanged and consumed at unprecedented rates.

According to Gartner, Internet of Things connected devices (excluding PCs, tablets and smart phones) will grow to 26 billion devices worldwide by 2020—a 30-fold increase from 2009. Sales of these devices will add $1.9 trillion in economic value globally.

Indeed, one of the major benefits of the Internet of Things movement is the connectivity and accessibility of data; however, this also raises concerns about securely managing that data.

Managing data security in hardware

Data security involves essential steps of authentication and encryption. We need to authenticate data generation and data collection sources, and we need to preserve the privacy of the data.

The Internet of Things comprises a variety of components: hardware, embedded software and services associated with the “things.” Data security is needed at each level.

Hardware security is generally implemented in the chips that make up the “things.” The mathematical security of authentication and encryption algorithms is less of a concern: these algorithms are not new, and the industry has addressed their soundness for several years.

Nonetheless, hackers can exploit implementation flaws in these chips. Side channel attacks (SCAs) are a major threat to data security within integrated circuits (ICs) that are used to hold sensitive data, such as identifying information and the secret keys needed for authentication or encryption algorithms. Specific SCAs include differential power analysis (DPA) and differential electromagnetic analysis (DEMA).

There are many published and unpublished attacks on the security of chips deployed in the market, and SCA threats are evolving rapidly, increasing in potency and becoming easier to mount.

These emerging threats render defensive techniques adopted by the IC manufacturers less potent over time, igniting a race between defensive and offensive (threat) techniques. For example, chips that deploy defensive techniques deemed sufficient in 2012 may be less effective in 2014 due to emerging threats. Once these devices are deployed, they become vulnerable to new threats.

Another challenge IC manufacturers face is the complexity of defensive techniques. Oftentimes, defensive techniques that are algorithm- or protocol-specific are layered on top of one another to address multiple targeted threats.

This “Band-Aid” approach is tedious and becomes unwieldy to manage. The industry must remember that leaving hardware vulnerable to SCA threats can significantly weaken data security. This vulnerability may manifest itself in the form of revenue loss (counterfeits of consumables), loss of privacy (compromised identification information), breach of authentication (rogue devices in the closed network) and more.

How to increase the permanence of security

A simplified way to look at the SCA problem is as a signal-to-noise issue. In this case, the signal is the sensitive data leaked through the power signature. The noise is the ambient or manufactured noise added to the system to prevent that signal from being extracted from the power signature.

Many defensive measures today concentrate on increasing noise in the system to obfuscate the signal. The challenge with this approach is that emerging statistical techniques are becoming adept at separating the signal from the noise, thereby decreasing the potency of the deployed defensive techniques.
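To make the signal-to-noise framing concrete, below is a minimal sketch of a correlation-style power analysis (a close relative of the DPA mentioned above) against a toy device. It assumes a simple Hamming-weight leakage model with Gaussian noise; the key byte, trace count and noise level are illustrative values, not figures from this article, and a real attack would target an algorithm's intermediate values (e.g., an AES S-box output).

```python
"""Toy correlation power analysis (CPA) sketch: with enough traces,
correlation pulls the key-dependent 'signal' out of the added 'noise'."""
import numpy as np

rng = np.random.default_rng(0)

SECRET_KEY_BYTE = 0x3C      # hypothetical secret the attacker recovers
N_TRACES = 5000             # number of recorded power traces
NOISE_SIGMA = 4.0           # ambient/manufactured noise level (arbitrary units)

def hamming_weight(x: int) -> int:
    return bin(x).count("1")

# Simulated device: power at the point of interest leaks the Hamming weight
# of (input XOR key), plus Gaussian noise.
inputs = rng.integers(0, 256, size=N_TRACES)
leak = np.array([hamming_weight(d ^ SECRET_KEY_BYTE) for d in inputs])
traces = leak + rng.normal(0.0, NOISE_SIGMA, size=N_TRACES)

# Attack: predict the leakage for every key guess and correlate the
# prediction with the measured traces; the correct guess correlates best.
def correlation(guess: int) -> float:
    model = np.array([hamming_weight(d ^ guess) for d in inputs])
    return np.corrcoef(model, traces)[0, 1]

best_guess = max(range(256), key=correlation)
print(f"recovered key byte: {best_guess:#04x}")   # expected: 0x3c
```

Adding more noise only raises the number of traces the attacker needs; it does not remove the underlying data-dependent signal.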

One way to effectively deal with this problem is to “weave security into the fabric of design.” SCA threats can then be addressed at the source rather than by treating the symptoms. What if we could make the power signature agnostic of the data being processed? What if we could build security into the basic building blocks of design? That would make the security more permanent and simplify its implementation.

A simplified approach to weaving security into the fabric of design is to leverage a secure standard cell library that is hardened against SCA. Such a library would use analog design techniques to tackle the problem of SCA at the source, diminishing the SCA signal so that it is difficult to extract from the power signature.
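One illustrative way hardened cells can make the power signature agnostic of the data they process is to balance the data representation, for example with dual-rail encoding, where every bit travels with its complement so that each encoded word has the same Hamming weight. The snippet below demonstrates only this balancing principle under the same toy Hamming-weight leakage model as above; it is a hedged illustration, not a description of ChaoLogix's actual circuit techniques, and real dual-rail logic also relies on precharge phases to balance switching transitions.

```python
def hamming_weight(x: int) -> int:
    return bin(x).count("1")

def dual_rail_encode(byte: int) -> int:
    """Map an 8-bit value to a 16-bit word carrying (bit, complement) pairs."""
    word = 0
    for i in range(8):
        bit = (byte >> i) & 1
        word |= bit << (2 * i)            # true rail
        word |= (bit ^ 1) << (2 * i + 1)  # complement rail
    return word

# Under a Hamming-weight leakage model, every encoded value "weighs" the same,
# so the static power signature no longer depends on the data.
print({hex(b): hamming_weight(dual_rail_encode(b)) for b in (0x00, 0x3C, 0xFF)})
# expected: {'0x0': 8, '0x3c': 8, '0xff': 8}
```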

Leveraging standard cells should be simple since they are the basic building blocks of digital design. As an industry, we cannot afford to bypass these critical steps to defend our data.

By Jeff Dorsch, Contributing Editor

Applied Materials on Wednesday reported that its proposed merger with Tokyo Electron Ltd. (TEL) is still under way, without giving a deadline or expected date of conclusion.

President and CEO Gary Dickerson said the company is “making progress with regulators” and plans to “complete the merger as soon as possible.” He declined to elaborate on that point, on the advice of the company’s attorneys.

Applied and TEL teams are working together to fulfill “the strategic opportunity this merger creates,” Dickerson said.

For the fiscal first quarter ended January 25, Applied received orders of $2.27 billion, up 1 percent from the fourth fiscal quarter and down 1 percent from a year earlier. The company posted sales of $2.36 billion, an increase of 4 percent from Q4 and up 8 percent from a year ago. Net income was $338 million, up 21 percent from the previous year’s $279 million.

“Major technology inflections in semiconductor and display are creating new growth opportunities for Applied’s precision materials engineering products and services,” Dickerson said in a statement. “With focus and execution, we are gaining momentum toward our long-term strategic goals, and this progress will be accelerated by our planned merger with Tokyo Electron.”

Applied forecasts sales in the second fiscal quarter will be flat to up a couple of percentage points from Q1. Dickerson said memory chips will drive demand for equipment in the fiscal first half, and the second half will see growth from foundries placing orders for equipment to be used in producing devices with FinFETs.

This article describes building blocks that can be used to fabricate other novel device architectures that take advantage of the unique properties of graphene or other interesting single-layer (i.e., 2D) materials.

BY V. KAUSHIK, N. AGBODO, H. CHUNG, M. HATZISTERGOS, B. JI, P. KHARE, T. LAURSEN, D. LOVELL, A. MESFIN, T. MURRAY, S. NOVAK, H. STAMPER, D. STEINKE, S. VIVEKANAND, T. VO, M. PASSARO AND M. LIEHR, College of Nanoscale Science and Engineering, SUNY Polytechnic Institute, Albany NY.

Graphene is a 2-dimensional sheet of sp2-hybridized carbon atoms with unique physical, mechanical and electrical properties. Commonly found in its multi-layer form, graphite, its isolation as a single-layer has received major attention in the literature for CMOS applications in low-power, high-mobility, analog and radio-frequency devices [1-5]. Single layers of graphene can be obtained by exfoliation from graphite, thermal decomposition on SiC surfaces [6], or chemical vapor deposition (CVD) on metallic surfaces such as copper or nickel. For the evaluation of its compatibility with silicon-based CMOS, it is necessary to study graphene layers on silicon wafers. In this article, we demonstrate the introduction of single-layer graphene — grown by CVD and transferred to 300mm Si wafers — into a state-of-the-art CMOS fab, and the further processing of these wafers using advanced CMOS techniques in order to obtain working graphene-channel FETs. The fabrication steps developed in this work, chosen to minimize impacts to graphene quality during fab-based processing, can serve as building blocks for future research in conventional and novel device architectures.

Graphene growth, etch and transfer

Graphene was grown by low-pressure CVD in a typical tube furnace on commercially available Cu foil at ~950°C by CH4 cracking. Prior to graphene transfer, a thermal-release tape was placed on the Cu foil to serve as a support for further processing. The Cu foil was then etched using a mixture of hydrochloric acid, hydrogen peroxide and de-ionized (DI) water. Multiple sequences of this etch were used to minimize re-deposition of the etched copper onto the tape with graphene, followed by a final DI water rinse, leaving only the graphene and tape remaining. The tape with the adhered graphene was then placed on a 300mm Si wafer or a Si wafer capped with SiO2 (SiO2/Si) and heated to remove the tape, resulting in a wafer with graphene on its surface. (Although the use of PMMA [poly(methyl methacrylate)] has been reported [7] as a suitable support film for graphene transfer, we did not use it since the PMMA is typically dissolved in acetone, a solvent that is incompatible with 300mm fab integration due to health and safety considerations.)

Due to the use of Cu foil and laboratory instruments, the graphene growth, etch and transfer processes have the potential to leave residual metal contamination on the wafer. Since the introduction of these wafers into the fab for further processing requires demonstration of low levels of metal contaminants, the graphene-on-Cu foil was handled and the Cu etched without the use of metallic instruments. Metallic ions on the transferred graphene were detected by TXRF (total-reflection X-ray fluorescence), a highly surface-sensitive technique. Using this feedback, we determined that the use of ceramic scissors for cutting the foil, ceramic tweezers for handling the tape and foil, and a well-ventilated, clean laminar-flow hood area were effective in reducing metallic contaminants.

FIGURE 1A shows an optical micrograph of graphene after transfer onto a SiO2/Si wafer and post-transfer cleans. While graphene covers most of the wafer surface, some gaps were observed due to imperfections in the transfer process. The thermal tape can also leave residue on the graphene (shown in Figure 1A), which is partly mitigated by the post-transfer cleans.

FIGURE 1B shows TXRF data of the concentration of metallic contaminants after graphene transfer and after post-transfer cleans using HCl chemistry. TXRF spectra of the SiO2/Si wafer after transfer showed high levels of metallic Cu, Fe and Ti. Post-transfer cleans of the wafer reduced the metallic contamination levels to ~5E10 at/cm2.

Raman spectroscopy is the technique most often used to measure the quality of monolayer graphene [2,8]. High-quality graphene shows distinct peaks at 1580cm-1 (G peak) and 2690cm-1 (2D peak). In undamaged graphene the 2D peak is a factor of two higher than the G peak, with this ratio decreasing as the layer accumulates damage. In addition, damaging the graphene causes the appearance of a peak at 1350cm-1 (D peak). These features were used to monitor the quality of the graphene layers at various stages in our process. FIGURE 1C shows a Raman spectrum of the graphene immediately following the transfer. The intensities of the 2D and G peaks are consistent with those for single-layer graphene. The low intensity of the D peak in FIGURE 1D confirms that the wet-clean sequences used did not significantly degrade the electrical and physical properties of the graphene.
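As a rough illustration of how these figures of merit are read from a spectrum, the sketch below picks out the D, G and 2D peaks near 1350, 1580 and 2690 cm-1 and reports the 2D/G and D/G intensity ratios. The synthetic spectrum, window width and peak shapes are assumptions for illustration only; a real analysis would fit the measured peaks rather than take raw maxima.

```python
import numpy as np

def peak_height(wavenumber, intensity, center, window=50.0):
    """Maximum intensity within +/- window (cm^-1) of the nominal peak position."""
    mask = np.abs(wavenumber - center) < window
    return float(intensity[mask].max())

def graphene_quality(wavenumber, intensity):
    d  = peak_height(wavenumber, intensity, 1350.0)   # D peak: disorder/damage
    g  = peak_height(wavenumber, intensity, 1580.0)   # G peak
    d2 = peak_height(wavenumber, intensity, 2690.0)   # 2D peak
    return {"I(2D)/I(G)": d2 / g, "I(D)/I(G)": d / g}

# Synthetic single-layer-like spectrum: 2D peak about twice the G peak,
# negligible D peak (values are illustrative, not measured data).
wn = np.linspace(1200.0, 2900.0, 3000)
spectrum = (2.0 * np.exp(-((wn - 2690.0) / 15.0) ** 2)
            + 1.0 * np.exp(-((wn - 1580.0) / 10.0) ** 2)
            + 0.05 * np.exp(-((wn - 1350.0) / 10.0) ** 2)
            + 0.02)
print(graphene_quality(wn, spectrum))  # I(2D)/I(G) near 2 indicates good single-layer graphene
```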


FIGURE 1. Shown is an optical micrograph of the graphene after transfer onto a SiO2/Si wafer and cleans, indicating areas with graphene (A), areas with no graphene (B), and suspected tape residue (C). Figure 1b shows metal contamination levels determined from TXRF spectra as-transferred and after wet cleans. Figures 1c and 1d show Raman spectra obtained from graphene on the SiO2 wafer before and after wet cleans, respectively, showing no degradation in single-layer graphene quality.

Device fabrication: Gate and dielectric formation

A simple MOSFET-like integration scheme using graphene as the channel material was chosen to demonstrate the processing of graphene in our 300mm line. For best device performance, a high-quality gate dielectric is required, and several dielectric layers deposited over graphene were evaluated using Raman spectroscopy to observe their effects on graphene quality. Processes that involved high temperatures (e.g., CVD) or plasmas introduced defects into the graphene, as is evident from the D-peaks shown in FIGURES 2A and 2B, thus ruling out typical gate dielectric layers available in CMOS fabs. Atomic layer deposition (ALD) processes have been reported to show poor nucleation on the graphene surface due to a lack of available bonds [9]. Evaporation processes, reported to be effective when a thin metal is deposited on top of the graphene and then oxidized, are not well suited to modern high-volume manufacturing fabs. To circumvent these problems, we ‘inverted’ the conventional MOSFET structure using buried gates [10]. In this scheme, tungsten gate electrodes were fabricated in thermal oxide by a damascene process. After these electrodes were in place, a gate-quality 4nm HfO2 dielectric was deposited using an ALD process. Graphene was then transferred onto this HfO2 surface. This approach eliminates the need for a gate-quality dielectric deposition over the graphene. FIGURE 3A shows the process sequence for the device, while FIGURE 3B shows a schematic of the device structure.


FIGURE 2. Raman spectra of the graphene layer with a D-peak indicative of damage after (a) plasma oxidation and (b) PVD metal deposition and oxidation. Figures 2c and 2d show Raman spectra after (c) spin-on dielectric deposition and (d) a subsequent bake anneal, indicating a reduction in the 2D/G peak ratio but no D-peak.

In order to process the wafers after graphene transfer, we capped the graphene with a spin-on dielectric film to protect its quality. We were thus able to avoid the above mentioned issues of high temperature, plasma processing, and nucleation. The spin-on dielectric film was ~35nm thick and allowed the graphene layer to withstand higher temperature and plasma processes, including film depositions, anneals, and reactive ion etching (RIE). With no discernible D-peak, the Raman spectra in FIGURES 2C AND 2D show that the capping layer preserved graphene quality.

FIGURE 3A. Schematic of process steps used in the fabrication of graphene-channel devices.



FIGURE 3B. Schematic of device structure to exercise process steps.

Subsequently, a photolithography step followed by an RIE process was used to pattern the active area and to remove the capping dielectric, graphene, and gate oxide from the field area (FIGURE 4A). With the active graphene area patterned, the process then moved to the contact module.

Device fabrication: Contact formation

The formation of metal contacts to graphene is one of the more challenging aspects of fabricating a graphene device in a modern fab. Most of the available literature reports the use of e-beam evaporation and lift-off techniques to form metal contacts to graphene [11,12]. However, these techniques and the typical metals used (Au, Au-Pd, Cr) are more suited to a lab environment than a high-volume Si fab. We used a conventional damascene contact process and a plated Cu-based metallurgy, which introduced challenges in the contact-open etch and cleans, and during metal deposition.

In this study, the contact stack consisted of conventional nitride and oxide that was planarized using chemical-mechanical polishing (CMP). Immersion lithography was used to define contacts with dimensions ranging from 100nm to 350nm. After the dry etch process, the wafers were cleaned using a wet chemistry compatible with the exposed graphene. A modified metal barrier/liner/seed process was then used to initiate the metallization process in the contacts, followed by CMP of the metal overburden. Contact to the graphene was made along the circumference of the contact plug, which has been reported to be more effective than top contact schemes [13]. The contact module was followed by a standard metallization module using a damascene copper process to fabricate the pads for automated in-line testing of the graphene FETs. Future work will include further optimization of etches and variations of liner metallurgy and contact architecture (top vs. edge) to study the effects on contact resistance and device performance.

Electrical test results

The devices were tested using DC current-voltage sweeps on various graphene-channel MOSFETs (GFETs) using a standard parametric tester. Two-point transport measurements demonstrated the MOSFETs’ gate-voltage-induced resistance modulation. A typical transport curve is shown in FIGURE 4D. Transistor behavior was observed in GFET devices with graphene channel widths ranging from 1μm to 10μm. GFET channel widths ranging from 50nm to 10μm were controlled by the patterned back gates. Low operating voltages (with Vg swept from -1V to 1V) were achieved due to our use of a thin high-k dielectric. The Dirac point in FIGURE 4D is nominally at 0V, with the gate resolution at 50mV, the step size of the sweep. Since we used only 2-point testing, the measured total resistance includes the channel resistance, the series resistance of the graphene area not covered by the gate, and the graphene-metal contact resistance. While this limits our ability to characterize the intrinsic GFET transport properties, it does point to the challenges of fabricating product-like devices; i.e., significant reductions of contact and series resistances are required.
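For readers post-processing similar in-line sweeps, the minimal sketch below takes an R(Vg) transport curve sampled from -1 V to +1 V in 50 mV steps (here synthesized), locates the Dirac point as the gate voltage of maximum resistance, and shows how a fixed contact-plus-series resistance caps the apparent modulation. The resistance values and curve shape are assumptions for illustration, not the measured data of Figure 4D.

```python
import numpy as np

# Synthetic 2-point GFET transport curve; values are illustrative only.
vg = np.arange(-1.0, 1.0 + 1e-9, 0.05)           # -1 V to +1 V in 50 mV steps
r_parasitic = 4.0e3                              # assumed contact + series resistance (ohms)
r_channel = 6.0e3 / (1.0 + (vg / 0.3) ** 2)      # peaks at the Dirac point
r_total = r_parasitic + r_channel                # what a 2-point measurement sees

dirac_v = vg[np.argmax(r_total)]
print(f"Dirac point ≈ {dirac_v:+.2f} V (50 mV resolution)")
print(f"resistance modulation: {r_total.max() / r_total.min():.1f}x, "
      "limited by the fixed contact + series resistance")
```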


FIGURE 4. (a) XSEM of the metal gate and active region after pattern and etch. (b) Lower-magnification view of copper contacts through the insulator, with metal at top. (c) Higher-magnification view of a 100nm contact; the dotted line shows the expected location of the graphene. (d) Transport curve of a graphene FET from a 2-point measurement on an in-line parametric tester. The gate voltage is controlled by the patterned back gate. The total resistance includes the channel resistance, the series resistance of the graphene area not covered by the gate, and the graphene-metal contact resistance.

Conclusion

We have demonstrated that working MOSFETs with graphene channels can be fabricated in a conventional 300mm CMOS fabrication line using state-of-the-art process tools. The building blocks shown here can be used to fabricate other novel device architectures that can take advantage of the unique properties of graphene or other interesting single-layer (i.e., 2D) materials. Further optimization of graphene transfer and contact schemes intended to reduce overall resistance are ongoing and will be reported in subsequent publications.

Acknowledgement

The authors acknowledge the support of Profs. Alain Diebold and Ji-Ung Lee.

References

1. “Electric field effect in atomically thin carbon films,” K. S. Novoselov, A. K. Geim, S. V. Morozov, et al., Science, vol. 306, pp. 666–669, 2004.
2. “Honeycomb Carbon: A review of graphene,” M. J. Allen, V. C. Tung and R. B. Kaner, Chemical Reviews, vol. 110, pp. 132–145, 2010.
3. “Graphene Electronics: Materials, Devices, and Circuits,” Y. Wu, D. B. Farmer, F. Xia and P. Avouris, Proceedings of the IEEE, vol. 101, no. 7, 2013.
4. “Unusual transport properties in carbon-based nanoscale materials: nanotubes and graphene,” M. Purewal, Y. Zhang and P. Kim, Phys. Stat. Sol. (b), vol. 243, no. 13, pp. 3418–3422, 2006.
5. “On the importance of band gap formation in graphene for analog device application,” S. Das and J. Appenzeller, IEEE Trans. Nanotechnol., vol. 10, no. 5, pp. 1093–1098, Sep. 2011.
6. “Epitaxial Graphene: Designing a new electronics material,” W. De Heer, C. Berger, X. Wu, et al., ECS Transactions, vol. 19, no. 5, pp. 95–105, 2009.
7. “Toward Clean and Crackless Transfer of Graphene,” X. Liang, B. Sperling, I. Calizo, et al., ACS Nano, vol. 5, no. 11, pp. 9144–9153, 2011.
8. “Raman Spectrum of Graphene and Graphene Layers,” A. C. Ferrari, J. C. Meyer, V. Scardaci, et al., Phys. Rev. Lett., vol. 97, 187401, 2006.
9. “Scaling of Al2O3 dielectric for graphene field-effect transistors,” B. Fallahazad, K. Lee, G. Lian, S. Kim, C. Corbet, D. Ferrer, L. Colombo and E. Tutuc, Applied Physics Letters, vol. 100, 093112, 2012.
10. “Angle-dependent carrier transmission in graphene p-n junctions,” S. Sutar, E. Comfort and J-U. Lee, Nano Letters, vol. 12, p. 4460, 2012.
11. “Contacting graphene,” J. A. Robinson, M. LaBella, M. Zhu, et al., Applied Physics Letters, vol. 98, 053103, 2011.
12. “Understanding the Electrical Impact of Edge Contacts in Few-Layer Graphene,” T. Chu and Z. Chen, ACS Nano, vol. 8, no. 4, pp. 3584–3589, 2014.
13. “One-Dimensional Electrical Contact to a Two-Dimensional Material,” L. Wang, I. Meric, P. Y. Huang, et al., Science, vol. 342, no. 6158, pp. 614–617, 1 November 2013.

The authors are with the College of Nanoscale Science and Engineering, SUNY Polytechnic Institute, 257 Fuller Rd, Albany NY 12203. 

The Semiconductor Industry Association (SIA), representing U.S. leadership in semiconductor manufacturing and design, today announced that the global semiconductor industry posted record sales totaling $335.8 billion in 2014, an increase of 9.9 percent from the 2013 total of $305.6 billion. Global sales for the month of December 2014 reached $29.1 billion, marking the strongest December on record, while December 2014 sales in the Americas increased 16 percent compared to December 2013. Fourth quarter global sales of $87.4 billion were 9.3 percent higher than the total of $79.9 billion from the fourth quarter of 2013. Total sales for the year exceeded projections from the World Semiconductor Trade Statistics (WSTS) organization’s industry forecast. All monthly sales numbers are compiled by WSTS and represent a three-month moving average.

“The global semiconductor industry posted its highest-ever sales in 2014, topping $335 billion for the first time thanks to broad and sustained growth across nearly all regions and product categories,” said John Neuffer, president and CEO, Semiconductor Industry Association. “The industry now has achieved record sales in two consecutive years and is well-positioned for continued growth in 2015 and beyond.”

Several semiconductor product segments stood out in 2014. Logic was the largest semiconductor category by sales, reaching $91.6 billion in 2014, a 6.6 percent increase compared to 2013. Memory ($79.2 billion) and micro-ICs ($62.1 billion) – a category that includes microprocessors – rounded out the top three segments in terms of sales revenue. Memory was the fastest growing segment, increasing 18.2 percent in 2014. Within memory, DRAM performed particularly well, increasing by 34.7 percent year-over-year. Other fast-growing product segments included power transistors, which reached $11.9 billion in sales for a 16.1 percent annual increase, discretes ($20.2 billion/10.8 percent increase), and analog ($44.4 billion/10.6 percent increase).

Annual sales increased in all four regional markets for the first time since 2010. The Americas market showed particular strength, with sales increasing by 12.7 percent in 2014. Sales were also up in Asia Pacific (11.4 percent), Europe (7.4 percent), and Japan (0.1 percent), marking the first time annual sales in Japan increased since 2010.

“The U.S. market demonstrated particular strength in 2014, posting double-digit growth to lead all regions,” continued Neuffer. “With the new Congress now underway, we urge policymakers to help foster continued growth by enacting policies that promote U.S. innovation and global competitiveness.”

December 2014 semiconductor sales (US$ billions; all figures are three-month moving averages)

Month-to-Month Sales
Market          Last Month    Current Month    % Change
Americas             6.53           6.73          3.1%
Europe               3.19           3.01         -5.8%
Japan                2.93           2.80         -4.6%
Asia Pacific        17.12          16.59         -3.1%
Total               29.77          29.13         -2.2%

Year-to-Year Sales
Market          Last Year     Current Month    % Change
Americas             5.80           6.73         16.0%
Europe               2.96           3.01          1.6%
Japan                2.93           2.80         -4.4%
Asia Pacific        14.96          16.59         10.9%
Total               26.65          29.13          9.3%

Three-Month-Moving Average Sales
Market          Jun/Jul/Aug   Sep/Oct/Nov      % Change
Americas             6.06           6.73         11.1%
Europe               3.21           3.01         -6.4%
Japan                3.03           2.80         -7.7%
Asia Pacific        16.93          16.59         -2.0%
Total               29.23          29.13         -0.4%
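The percentage columns are simple ratios of the dollar figures; the sketch below recomputes the month-to-month changes as an example. Note that SIA/WSTS derive the published percentages from unrounded data, so recomputing from the two-decimal figures shown here can differ by a tenth of a point or so.

```python
# Recompute the month-to-month % change column from the (rounded) figures above.
month_to_month = {                      # market: (last month, current month), US$ billions
    "Americas":     (6.53, 6.73),
    "Europe":       (3.19, 3.01),
    "Japan":        (2.93, 2.80),
    "Asia Pacific": (17.12, 16.59),
    "Total":        (29.77, 29.13),
}

for market, (prev, curr) in month_to_month.items():
    change = (curr / prev - 1.0) * 100.0
    print(f"{market:<13}{prev:>7.2f} ->{curr:>7.2f}   {change:+.1f}%")
```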

Leading industry experts provide their perspectives on what to expect in 2015. 3D devices and 3D integration, rising process complexity and “big data” are among the hot topics.

Entering the 3D era

Steve Ghanayem, vice president, general manager, Transistor and Interconnect Group, Applied Materials

This year, the semiconductor industry celebrates the 50th anniversary of Moore’s Law. We are at the onset of the 3D era. We expect to see broad adoption of 3D FinFETs in logic and foundry. Investments in 3D NAND manufacturing are expanding as this technology takes hold. This historic 3D transformation impacting both logic and memory devices underscores the aggressive pace of technology innovation in the age of mobility. The benefits of going 3D — lower power consumption, increased processing performance, denser storage capacity and smaller form factors — are essential for the industry to enable new mobility, connectivity and Internet of Things applications.

The semiconductor equipment industry plays a major role in enabling this 3D transformation through new materials, capabilities and processes. Fabricating leading-edge 3D FinFET and NAND devices adds chip-manufacturing complexity that has soared with each node transition. The 3D structure poses unique challenges for deposition, etch, planarization, materials modification and selective processes to create a yielding device, requiring significant innovations in critical dimension control, structural integrity and interface preparation. As chips get smaller and more complex, variations accumulate while process tolerances shrink, eroding performance and yields. Chipmakers need cost-effective solutions to rapidly ramp device yield to maintain the cadence of Moore’s Law. Given these challenges, 2015 will be the year when precision materials engineering technologies are put to the test to demonstrate high-volume manufacturing capabilities for 3D devices.

Achieving excellent device performance and yield for 3D devices demands equipment engineering expertise leveraging decades of knowledge to deliver the optimal system architecture with wide process window. Process technology innovation and new materials with atomic-scale precision are vital for transistor, interconnect and patterning applications. For instance, transistor fabrication requires precise control of fin width, limiting variation from etching to lithography. Contact formation requires precision metal film deposition and atomic-level interface control, critical to lowering contact resistance. In interconnect, new materials such as cobalt are needed to improve gap fill and reliability of narrow lines as density increases with each technology node. Looking forward, these precision materials engineering technologies will be the foundation for continued materials-enabled scaling for many years to come.

Increasing process complexity and opportunities for innovation

Brian Trafas, Chief Marketing Officer, KLA-Tencor Corporation

The 2014 calendar year started with promise and optimism for the semiconductor industry, and it concluded with similar sentiments. While concerns about financial risk and industry consolidation at times overshadow the industry, there is much to be positive about as we arrive in the new year. From increases in equipment spending and revenue in the materials market to record projections for silicon wafer shipments, 2015 forecasts all point in the right direction. Industry players are also doing their part to address new challenges, creating strategies to overcome complexities associated with innovative techniques such as multi-patterning and 3D architectures.

The semiconductor industry continues to explore new technologies, including 3DIC, TSV, and FinFETs, which carry challenges that also happen to represent opportunities. First, for memory as well as foundry logic, the need for multi-patterning to extend lithography is a key focus. We’re seeing some of the value of a traditional lithography tool shifting into the non-litho processing steps. As such, customers need to monitor litho and non-litho sources of error and critical defects to be able to yield successfully at next-generation nodes. To enable successful yields with decreasing patterning process windows, it is essential to address all sources of error and to feed corrections forward and backward correctly.

The transition from 2D to 3D in memory and logic is another focus area.  3D leads to tighter process margins because of the added steps and complexity.  Addressing specific yield issues associated with 3D is a great opportunity for companies that can provide value in addressing the challenges customers are facing with these unique architectures.

The wearable, intelligent mobile and IoT markets are continuing to grow rapidly and bring new opportunities. We expect the IoT to drive higher levels of semiconductor content and contribute to future growth in the industry. Demand for these types of devices will add to the entire value chain, including not only semiconductor devices but also software and services. The semiconductor content in these devices can provide growth opportunities for microcontrollers and embedded processors as well as sensing devices.

Critical to our industry’s success is tight collaboration among peers and with customers. With such complexity to the market and IC technology, it is very important to work together to understand challenges and identify where there are opportunities to provide value to customers, ultimately helping them to make the right investments and meet their ramps.

Controlling manufacturing variability key to success at 10nm

Richard Gottscho, Ph.D., Executive Vice President, Global Products, Lam Research Corporation

This year, the semiconductor industry should see the emergence of chip-making at the 10nm technology node. When building devices with geometries this small, controlling manufacturing process variability is essential and most challenging since variation tolerance scales with device dimensions.

Controlling variability has always been important for improving yield and device performance. With every advance in technology and change in design rule, tighter process controls are needed to achieve these benefits. At the 22/20nm technology node, for instance, variation tolerance for CDs (critical dimensions) can be as small as one nanometer, or about 14 atomic layers; for the 10nm node, it can be less than 0.5nm, or just 3 – 4 atomic layers. Innovations that drive continuous scaling to sub-20nm nodes, such as 3D FinFET devices and double/quadruple patterning schemes, add to the challenge of reducing variability. For example, multiple patterning processes require more stringent control of each step because additional process steps are needed to create the initial mask:  more steps mean more variability overall. Multiple patterning puts greater constraints not only on lithography, but also on deposition and etching.
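As a simplified, concrete illustration of why added patterning steps inflate variability: if the step-level CD contributions are independent, they add in quadrature, so every extra step grows the total while the tolerance budget shrinks. The per-step sigmas below are hypothetical values chosen only to show the arithmetic, not published process data.

```python
import math

# Hypothetical independent 1-sigma CD contributions (nm) in a multi-patterning flow.
step_sigmas_nm = {
    "mandrel lithography": 0.20,
    "spacer deposition":   0.15,
    "spacer etch":         0.15,
    "mandrel pull / trim": 0.10,
}

combined = math.sqrt(sum(s ** 2 for s in step_sigmas_nm.values()))
print(f"combined 1-sigma CD variation ≈ {combined:.2f} nm "
      f"(3-sigma ≈ {3 * combined:.2f} nm)")
# Each added step only grows this total, while the tolerance budget shrinks
# with the node (e.g., less than 0.5 nm at 10nm, per the text above).
```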

Three types of process variation must be addressed:  within each die or integrated circuit at an atomic level, from die to die (across the wafer), and from wafer to wafer (within a lot, lot to lot, chamber to chamber, and fab to fab). At the device level, controlling CD variation to within a few atoms will increasingly require the application of technologies such as atomic layer deposition (ALD) and atomic layer etching (ALE). Historically, some of these processes were deemed too slow for commercial production. Fortunately, we now have cost-effective solutions, and they are finding their way into volume manufacturing.

To complement these capabilities, advanced process control (APC) will be incorporated into systems to tune chemical and electrical gradients across the wafer, further reducing die-to-die variation. In addition, chamber matching has never been more important. Big data analytics and subsystem diagnostics are being developed and deployed to ensure that every system in a fab produces wafers with the same process results to atomic precision.

Looking ahead, we expect these new capabilities for advanced variability control to move into production environments sometime this year, enabling 10nm-node device fabrication.

2015: The year 3D-IC integration finally comes of age

Paul Lindner, Executive Technology Director, EV Group

2015 will mark an important turning point in the course of 3D-IC technology adoption, as the semiconductor industry moves 3D-IC fully out of the development and prototyping stages onto the production floor. In several applications, this transition is already taking place. To date, at least a dozen components in a typical smart phone employ 3D-IC manufacturing technologies. While the application processor and memory in these smart devices continue to be stacked at the package level (package-on-package, or PoP), many other device components, including image sensors, MEMS, RF front-end and filter devices, are now realizing the promise of 3D-IC, namely reduced form factor, increased performance and, most importantly, reduced manufacturing cost.

The increasing adoption of wearable mobile consumer products will also accelerate the need for higher-density integration and reduced form factor, particularly with respect to MEMS devices. More functionality will be integrated both within the same device and within one package via 3D stacking. Nine-axis inertial measurement units (IMUs, which comprise three accelerometer, three gyroscope and three magnetometer axes) will see reductions in size, cost and power consumption, along with easier integration.

On the other side of the data stream, at data centers, expect to see new developments around 3D-IC technology coming to market in 2015 as well. Compound semiconductors integrated with photonics and CMOS will trigger the replacement of copper wiring with optical fibers to drive down power consumption and electricity costs, thanks to 3D stacking technologies. The recent introduction of stacked DRAM with high-performance microprocessors, such as Intel’s Knights Landing processor, already demonstrates how 3D-IC technology is finally delivering on its promises across many different applications.

Across these various applications that are integrating stacked 3D-IC architectures, wafer bonding will play a key role. This is true for 3D-ICs integrating through silicon vias (TSVs), where temporary bonding in the manufacturing flow or permanent bonding at the wafer-level is essential. It’s the case for reducing power consumption in wearable products integrating MEMS devices, where encapsulating higher vacuum levels will enable low-power operation of gyroscopes. Finally, wafer-level hybrid fusion bonding—a technology that permanently connects wafers both mechanically and electrically in a single process step and supports the development of thinner devices by eliminating adhesive thickness and the need for bumps and pillars—is one of the promising new processes that we expect to see utilized in device manufacturing starting in 2015.

2015: Curvilinear Shapes Are Coming

Aki Fujimura, CEO, D2S

For the semiconductor industry, 2015 will be the start of one of the most interesting periods in the history of Moore’s Law. For the first time in two decades, the fundamental machine architecture of the mask writer is going to change over the next few years—from Variable Shaped Beam (VSB) to multi-beam. Multi-beam mask writing is likely the final frontier—the technology that will take us to the end of the Moore’s Law era. The write times associated with multi-beam writers are constant regardless of the complexity of the mask patterns, and this changes everything. It will open up a new world of opportunities for complex mask making that make trade-offs between design rules, mask/wafer yields and mask write-times a thing of the past. The upstream effects of this may yet be underappreciated.

While high-volume production of multi-beam mask writing machines may not arrive in time for the 10nm node, the industry is expressing little doubt of its arrival by the 7nm node. Since transitions of this magnitude take several years to successfully permeate through the ecosystem, 2015 is the right time to start preparing for the impact of this change.  Multi-beam mask writing enables the creation of very complex mask shapes (even ideal curvilinear shapes). When used in conjunction with optical proximity correction (OPC), inverse lithography technology (ILT) and pixelated masks, this enables more precise wafer writing with improved process margin.  Improving process margin on both the mask and wafer will allow design rules to be tighter, which will re-activate the transistor-density benefit of Moore’s Law.

The prospect of multi-beam mask writing makes it clear that OPC needs to yield better wafer quality by taking advantage of complex mask shapes. This clear direction for the future, together with the need for more process margin and overlay accuracy at the 10nm node, points to a requirement for complex mask shapes at 10nm. Technologies such as model-based mask data preparation (MB-MDP) will take center stage in 2015 as a bridge to 10nm using VSB mask writing.

Whether for VSB mask writing or for multi-beam mask writing, the shapes we need to write on masks are increasingly complex, increasingly curvilinear, and smaller in minimum width and space. The overwhelming trend in mask data preparation is the shift from deterministic, rule-based, geometric, context-independent, shape-modulated, rectangular processing to statistical, simulation-based, context-dependent, dose- and shape-modulated, any-shape processing. We will all be witnesses to the start of this fundamental change as 2015 unfolds. It will be a very exciting time indeed.

Data integration and advanced packaging driving growth in 2015

Mike Plisinski, Chief Operating Officer, Rudolph Technologies, Inc.

We see two important trends that we expect to have major impact in 2015. The first is a continuing investment in developing and implementing 3D integration and advanced packaging processes, driven not only by the demand for more power and functionality in smaller volumes, but also by the dramatic escalation in the number and density of I/O lines per die. This includes not only through-silicon vias, but also copper pillar bumps, fan-out packaging and hyper-efficient panel-based packaging processes that use dedicated lithography systems on rectangular substrates. As the back end adopts and adapts processes from the front end, the lines that have traditionally separated these areas are blurring. Advanced packaging processes require significantly more inspection and control than conventional packaging, and this trend is still only in its early stages.

The other trend has a broader impact on the market as a whole. As consumer electronics becomes a more predominant driver of our industry, manufacturers are under increasing pressure to ramp new products faster and at higher volumes than ever before. Winning or losing an order from a mega cell phone manufacturer can make or break a year, and those orders are being won based on technology and quality, not only price as in the past. This is forcing manufacturers to look for more comprehensive solutions to their process challenges. Instead of buying a tool that meets certain criteria of their established infrastructure, then getting IT to connect it and interpret the data and write the charts and reports for the process engineers so they can use the tool, manufacturers are now pushing much of this onto their vendors, saying, “We want you to provide a working tool that’s going to meet these specs right away and provide us the information we need to adjust and control our process going forward.” They want information, not just data.

Rudolph has made, and will continue to make, major investments in the development of automated analytics for process data. Now more than ever, when our customers buy a system from us, whatever its application (lithography, metrology, inspection or something new), they also want to correlate the data it generates with data from other tools across the process in order to provide more information about process adjustments. We expect these same customer demands to drive a new wave of collaboration among vendors, and we welcome the opportunity to work together to provide more comprehensive solutions for the benefit of our mutual customers.

Process Data – From Famine to Feast

Jack Hager, Product Marketing Manager, FEI

As shrinking device sizes have forced manufacturers to move from SEM to TEM for analysis and measurement of critical features, process and integration engineers have often found themselves having to make critical decisions using meagre rations of process data. Recent advances in automated TEM sample preparation, using FIBs to prepare high-quality, ultra-thin, site-specific samples, have opened the tap on the flow of data. Engineers can now make statistically sound decisions in an environment of abundant data. The availability of fast, high-quality TEM data has whetted their appetites for even more data, and the resulting demand is drawing sample preparation systems, and in some cases TEMs, out of remote laboratories and onto the fab floor or into a “near-line” location. With the high degree of automation of both the sample preparation and the TEM, the process engineers who ultimately consume the data can now own and operate the systems that generate it, giving them control over the amount of data created.

The proliferation of exotic materials and new 3D architectures at the most advanced nodes has dramatically increased the need for fast, accurate process data. The days when performance improvements required no more than a relatively simple “shrink” of basically 2D designs using well-understood processes are long gone. Complex new processes require additional monitoring to aid in process control and failure-analysis troubleshooting. Defects, both electrical and physical, are not only more numerous, but typically smaller and more varied. These defects are often buried below the exposed surface, which limits the effectiveness of traditional inline defect-monitoring equipment and creates renewed challenges in diagnosing their root causes. TEM analysis now plays a more prevalent role, providing defect insights that allow actionable process changes.

While process technologies have changed radically, market fundamentals have not. First to market still commands premium prices and builds market share. And time to market is determined largely by the speed with which new manufacturing processes can be developed and ramped to high yields at high volumes. It is in these critical phases of development and ramp that the speed and accuracy of automated sample preparation and TEM analysis is proving most valuable. The methodology has already been adopted by leading manufacturers across the industry – logic and memory, IDM and foundry. We expect the adoption to continue, and with it, the migration of sample preparation and advanced measurement and analytical systems into the fab. 

Diversification of processes, materials will drive integration and customization in sub-fab

Kate Wilson, Global Applications Director, Edwards

We expect the proliferation of new processes, materials and architectures at the most advanced nodes to drive significant changes in the sub-fab where we live. In particular, we expect to see a continuing move toward the integration of vacuum pumping and abatement functions, with custom tuning to optimize performance for the increasingly diverse array of applications becoming a requirement. There is also an increased requirement for additional features around the core units, such as thermal management, heated N2 injection, and pre- and post-pump precursor treatment, all of which need to be managed.

Integration offers clear advantages, not only in cost savings but also in safety, speed of installation, smaller footprint, consistent implementation of correct components, optimized set-ups and controlled ownership of the process effluents until they are abated reliably to safe levels. The benefits are not always immediately apparent. Just as effective integration is much more than simply adding a pump to an abatement system, the initial cost of an integrated system is more than the cost of the individual components. The cost benefits in a properly integrated system accrue primarily from increased efficiencies and reliability over the life of the system, and the magnitude of the benefit depends on the complexity of the process. In harsh applications, including deposition processes such as CVD, Epi and ALD, integrated systems provide significant improvements in uptime, service intervals and product lifetimes as well as significant safety benefits.

The trend toward increasing process customization impacts the move toward integration through its requirement that the integrator have detailed knowledge of the process and its by-products. Each manufacturer may use a slightly different recipe and a small change in materials or concentrations can have a large effect on pumping and abatement performance. This variability must be addressed not only in the design of the integrated system but also in tuning its operation during initial commissioning and throughout its lifetime to achieve optimal performance. Successful realization of the benefits of integration will rely heavily on continuing support based on broad application knowledge and experience.

Giga-scale challenges will dominate 2015

Dr. Zhihong Liu, Executive Chairman, ProPlus Design Solutions, Inc.

It wasn’t all that long ago when nano-scale was the term the semiconductor industry used to describe small transistor sizes to indicate technological advancement. Today, with Moore’s Law slowing down at sub-28nm, the term more often heard is giga-scale due to a leap forward in complexity challenges caused in large measure by the massive amounts of big data now part of all chip design.

Nano-scale technological advancement has enabled giga-sized applications for a greater variety of technology platforms, including the most popular mobile, IoT and wearable devices. EDA tools must respond to this trend. On one side, accurately modeling nano-scale devices, including complex physical effects due to small geometry sizes and complicated device structures, has increased in both importance and difficulty. Designers now demand more from foundries and have higher standards for PDK and model accuracy. They need a deep understanding of the process platform in order to make their chip or IP competitive.

On the other side, giga-scale designs require accurate tools to handle increasing design sizes. The small supply voltages associated with technology advancement and low-power applications, and the impact of various process-variation effects, have reduced available design margins. Furthermore, large circuit sizes have made designs sensitive to small leakage currents and small noise margins. Accuracy will soon become the bottleneck for giga-scale designs.

However, traditional design tools for big designs, such as FastSPICE for simulation and verification, mostly trade off accuracy for capacity and performance. One particular example is the need for accurate memory design, e.g., large-instance memory characterization, or full-chip timing and power verification. Because embedded memory may occupy more than 50 percent of chip die area, it has a significant impact on chip performance and power. For advanced designs, power or timing characterization and verification require much higher accuracy than FastSPICE can offer: errors of 5 percent or less compared to golden SPICE.

To meet the giga-scale challenges outlined above, the next-generation circuit simulator must offer the high accuracy of a traditional SPICE simulator along with capacity and performance advantages similar to those of a FastSPICE simulator. New entrants into the giga-scale SPICE simulation market readily handle the latest process technologies, such as 16/14nm FinFET, which add further challenges to capacity and accuracy.

One giga-scale SPICE simulator can cover small and large block simulations, characterization, or full-chip verifications, with a pure SPICE engine that guarantees accuracy, and eliminates inconsistencies in the traditional design flow.  It can be used as the golden reference for FastSPICE applications, or directly replace FastSPICE for memory designs.

The giga-scale era in chip design is here and giga-scale SPICE simulators are commercially available to meet the need.

Underdog DRAM


January 20, 2015

By Christian G. Dieseldorff, Industry Research & Statistics Group, SEMI

The DRAM sector experienced a major decline during and following the 2008-2009 financial crisis and eventually contracted, in both the number of suppliers and installed fab production capacity. According to the SEMI World Fab Forecast Report, the outlook is now more positive: DRAM bit demand is on the rise and average selling prices improved in both 2013 and 2014. Installed capacity growth is expected to move from negative to positive territory by the end of 2016, but the factors for growth are complicated by complex technology issues.

For five years before the economic downturn, yearly growth rates for installed fab capacity trended in high double digits.  Looking back to 2007, eleven major companies produced DRAM chips in about 40 facilities globally, with installed capacity growing by 40 to 50 percent year-over-year from 2003 to 2007.

Since that time, as the industry consolidated and contracted, the number of companies has shrunk from 11 players to six, with only 20 facilities in production and three major players (Samsung, Micron and SK Hynix) (see Figure 1 below, from the SEMI World Fab Forecast report). Qimonda, Promos and Powerchip left the scene, while Elpida and Rexchip were acquired by Micron. In addition, some front-end fabs were converted from DRAM to logic, flash or other purposes.


Figure 1: Major DRAM companies (including Inotera) operating chip facilities (Source: SEMI, 2015)

The smaller number of key suppliers has stabilized the DRAM investment cycle and increasingly the manufacturers focus investments on market demand, not on production share gain. Meanwhile, DRAM bit demand is growing for applications such as mobile and infrastructure/servers.  One leading memory company has predicted CAGR of 27 percent for bit growth from 2013 to 2017.

Obstacle: New Paradigm is a Loss of Capacity

SEMI’s tracking of fab data reveals that when a company transitions a fab to the next leading-edge technology, there is a capacity loss. Increased complexity and more process steps mean that these fabs produce fewer wafers per square foot of cleanroom. This trend affects all industry segments, beginning at the 30/28nm node and smaller, and has been observed since 2012. Depending on the age of the fab and the product type, this loss can be significant, as much as 10-20 percent (see Figure 2).

Figure 2: Blue line shows that existing DRAM fabs lose capacity over time when transitioning to next technology node, while the red line shows new DRAM facilities adding capacity. (Source: SEMI, 2015)


SEMI’s World Fab Forecast report tracks nine fabs following this pattern: a significant loss of capacity when transitioning to the next leading-edge technology node. From 2014 to 2016, existing DRAM fabs are expected to lose a total of about 25,000 wafers per month each year as they transition to the next leading-edge node.

To compensate for this and to meet expected bit demand, the industry is beginning to add new capacity with new fabs and lines. By 2015, three or four new fabs or lines will be in operation. Of course, these will require time to ramp up, meaning that the net capacity change likely will not shift from negative to positive territory until 2016, when about 3 percent growth is forecast. Figure 3 illustrates how this could potentially affect worldwide DRAM capacity.
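To show how the upgrade losses and new-fab ramps net out, here is a deliberately simplified sketch. Only the ~25,000 wafer-per-month annual upgrade loss comes from the SEMI discussion above; the installed-base and ramp figures are placeholder assumptions, so the output merely illustrates why net growth can stay negative until the new lines ramp.

```python
# Hypothetical netting of worldwide DRAM capacity changes (wafers per month).
installed_base_wpm = 1_000_000                 # assumed starting installed capacity
upgrade_loss_per_year = 25_000                 # from the SEMI discussion above
assumed_new_ramp = {2014: 5_000, 2015: 15_000, 2016: 55_000}   # placeholder ramp profile

capacity = installed_base_wpm
for year, ramp in assumed_new_ramp.items():
    previous = capacity
    capacity += ramp - upgrade_loss_per_year
    print(f"{year}: net capacity change {100 * (capacity / previous - 1):+.1f}%")
```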

Figure 3: Worldwide DRAM capacity for Front End facilities in 300mm equivalent wafers per month and change rate in percent (Source: SEMI, 2015)


The worldwide loss of DRAM capacity from 2010 to 2014 is about 25 percent. The loss of capacity due to technology upgrades kicked in around 2013; before that, the loss was due to consolidations, closures and changes of product type.

Obstacle: What’s Next, after 15nm?

Shrinking the DRAM nodes has become increasingly difficult. While most companies produce the 30nm-25nm nodes in volume, some have already begun to offer the 21/20nm node. The next stage beyond that is only just being explored. Will the industry see another shrink, down to 1Ynm? Or is this too challenging, and for most, not economically feasible? Other technologies may move forward to eventually replace conventional DRAM, such as non-volatile memories like MRAM (magnetic RAM), FeRAM (ferroelectric RAM), ReRAM (resistive RAM), and PRAM or PCRAM (phase-change RAM). As these technologies surface, DRAM capacity may be challenged again.

In summary, DRAM, the underdog, comes from behind and appears to promise positive growth by 2016. With the introduction of new technologies, it remains to be seen how DRAM capacity will be impacted and how much new wafer capacity will be needed. The SEMI World Fab Forecast Report lists over 40 facilities making DRAM products. Many facilities have major spending for equipment and construction planned for 2015. Learn more at www.semi.org/MarketInfo/FabDatabase  and www.youtube.com/user/SEMImktstats.

SEMI World Fab Forecast Report

SEMI’s World Fab Forecast report lists over 40 facilities making DRAM products. Twenty of these are dedicated DRAM facilities, and over 30 have major spending planned in 2015 for equipment and construction.

The SEMI World Fab Forecast uses a bottom-up methodology, providing high-level summaries and graphs, and in-depth analyses of capital expenditures, capacities, technology and products by fab. Additionally, the database provides forecasts for the next 18 months by quarter. These tools are invaluable for understanding how semiconductor manufacturing will look in 2014 and 2015, and for learning more about capex for construction projects, fab equipping, technology levels, and products.

The SEMI Worldwide Semiconductor Equipment Market Subscription (WWSEMS) data tracks only new equipment for fabs and test and assembly and packaging houses.  The SEMI World Fab Forecast and its related Fab Database reports track any equipment needed to ramp fabs, upgrade technology nodes, and expand or change wafer size, including new equipment, used equipment, or in-house equipment. Also check out the Opto/LED Fab Forecast. Learn more about the SEMI fab databases at: www.semi.org/MarketInfo/FabDatabase and www.youtube.com/user/SEMImktstats

SUNY Polytechnic Institute (SUNY Poly) yesterday announced the SUNY Board of Trustees has appointed Dr. Alain Kaloyeros as the founding President of SUNY Poly.

“Dr. Alain Kaloyeros has led SUNY’s College of Nanoscale Science and Engineering since its inception, helping to make this first-of-its-kind institution a global model and position New York State as a leader in the nanotechnology-driven economy of the 21st century,” said SUNY Board Chairman H. Carl McCall. “It is only fitting that Dr. Kaloyeros be the one to build that model and bring it to scale through the continued development and expansion of SUNY Polytechnic Institute.”

“As the visionary who built CNSE into a world-class, high-tech, and globally recognized academic and economic development juggernaut, Dr. Alain Kaloyeros is the clear choice to lead SUNY Polytechnic Institute into the future,” said SUNY Chancellor Nancy L. Zimpher. “The unprecedented statewide expansion of the campus’ unique model and continued strong partnership with Governor Andrew Cuomo is testament to SUNY’s promise as New York’s economic engine and stature as an affordable, world-class educational institution. I am confident that, as its president, Dr. Kaloyeros will continue to build on SUNY Poly’s success and contributions to New York.”

“SUNY Polytechnic Institute is a revolutionary discovery and education model with two coequal campuses in Utica and Albany, and a key component of Governor Cuomo’s vision for high-tech innovation, job creation, and economic development in New York State.  I am privileged and humbled to be selected for the honor of leading this world-class institution and its talented and dedicated faculty, staff, and students,” said Dr. Kaloyeros.  “I would like to extend my sincere gratitude to the Governor, Chairman Carl McCall, the SUNY Board of Trustees, and Chancellor Nancy Zimpher for their continued confidence and support.”

Dr. Kaloyeros received his Ph.D. in Experimental Condensed Matter Physics from the University of Illinois at Urbana-Champaign in 1987.  A year later, Governor Mario M. Cuomo recruited Dr. Kaloyeros under the SUNY Graduate Research Initiative.  Since then, Dr. Kaloyeros has been actively involved in the development and implementation of New York’s high-tech strategy to become a global leader in the nanotechnology-driven economy of the 21st Century.

A critical cornerstone of New York’s high-technology strategy has been the establishment of the Colleges of Nanoscale Science and Engineering (CNSE) at SUNY Poly as a truly global resource that enables pioneering research and development, technology deployment, education, and commercialization for the international nanoelectronics industry.  CNSE was originally founded in April 2004 in response to the rapid changes and evolving needs in the educational and research landscapes brought on by the emergence of nanotechnology.  Under Dr. Kaloyeros’ leadership, CNSE has generated over $20B in public and private investments.

In 2014, CNSE merged with the SUNY Institute of Technology to form SUNY Poly, which today represents the world’s most advanced university-driven research enterprise, offering students a one-of-a-kind academic experience and providing over 300 corporate partners with access to an unmatched ecosystem for leading-edge R&D and commercialization of nanoelectronics and nanotechnology innovations.


The Internet of Everything, cloud computing/big data and 3-D printing are the three technologies most likely to transform the world during the next five years, according to IHS Technology.

“We know that technology has the capability to change the world: from the Gutenberg printing press to the steam engine to the microchip,” said Ian Weightman, vice president, research & operations, IHS Technology. “But how can we determine which technologies are likely to have the greatest potential to transform the future of the human race? What is the process to distinguish among the innovations that will have limited impact and those that will be remembered as milestones on the path of progress? How can you tell the difference between the VHS and Betamax of tomorrow’s technologies?”

To answer these questions, IHS Technology gathered its leading experts representing the technology supply chain, from electronic components to finished products, across applications markets ranging from consumer, media and telecom to industrial, medical and power. These experts were asked to nominate and vote for their top 10 most impactful technologies over the next five years.

The top three technologies were: 3-D printing in third place; cloud computing/big data at No. 2; and the Internet of Everything coming out on top.

Manufacturing moves to next dimension with 3-D printing

Also called additive manufacturing, 3-D printing encourages design innovation by facilitating the creation of new structures and shapes, and allows limitless product complexity without additional production costs. It also greatly speeds up time to market by making the idea-to-prototype cycle much shorter.

Total revenue for the 3-D printing industry is forecast to grow by nearly 40 percent annually through 2020, when the aggregated market size is expected to exceed $35.0 billion, up from $5.6 billion in 2014.
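As a rough sanity check on that forecast, the implied compound annual growth rate (CAGR) can be computed from the two published endpoints. The short Python sketch below is purely illustrative and is not IHS methodology; it simply treats the $5.6 billion (2014) and $35.0 billion (2020) figures as endpoints.

```python
# Illustrative CAGR check on the 3-D printing forecast (not IHS methodology).
start_revenue_bn = 5.6   # 2014 market size, $ billions (from the article)
end_revenue_bn = 35.0    # 2020 market size, $ billions ("expected to exceed $35.0 billion")
years = 2020 - 2014      # six growth years between the endpoints

cagr = (end_revenue_bn / start_revenue_bn) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~35.7%, consistent with "nearly 40 percent"

# For comparison, project the 2014 figure forward at a flat 40 percent per year.
projected_2020_bn = start_revenue_bn * 1.40 ** years
print(f"$5.6B grown at 40%/yr for {years} years: ${projected_2020_bn:.1f}B")  # ~$42.2B
```

The implied rate of roughly 36 percent per year sits just below the "nearly 40 percent" quoted above, since the $35.0 billion figure is described as a lower bound for 2020.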

Cloud computing/big data brings metamorphosis to computing and consumer markets

The cloud has become a ubiquitous description for on-demand provisioning of data, storage, computing power and services that are touching nearly every consumer and enterprise across the globe. Together with data analytics and mobile broadband, the cloud and big data are poised to reshape almost every facet of the consumer digital lifestyle experience and to dramatically impact enterprise information technology (IT) strategies, while creating new opportunities and challenges for the various nodes in the information and communications technology (ICT) value chain.

The cloud is transformational in the business landscape, changing the way enterprises interact with their suppliers, customers and developers.

The big data and data analytics segment is a separate but related transformational technology that harnesses the power of the cloud to analyze data from disparate sources, uncover hidden patterns, enable predictive analysis and achieve huge efficiencies in performance.

IHS forecasts that global enterprise IT spending on cloud-based architectures will double to approximately $230 billion in 2017, up from about $115 billion in 2012.

The Internet of Things becomes the Internet of Everything

The world is in the early stages of the Internet of Things (IoT)—a technological evolution that is based on the way that Internet-connected devices can be used to enhance communication, automate complex industrial processes and generate a wealth of information. To provide some context on the magnitude of this evolution, more than 80 billion Internet-connected devices are projected to be in use in 2024, up from less than 20 billion in 2014, as presented in the figure below.

While the IoT concept is still relatively new, it is already transforming into a broader model: the Internet of Everything (IoE). The metamorphosis covers not just the number of devices but envisages a complete departure from the way these devices have used the Internet in the past.

Most of the connected devices in place today largely require direct human interaction and are used for the consumption of content and entertainment. The majority of the more than 80 billion future connections will be employed to monitor and control systems, machines and objects—including lights, thermostats, window locks and under-the-hood automotive electronics.

Other transformative technologies identified by IHS Technology analysts were:

  • Artificial intelligence
  • Biometrics
  • Flexible displays
  • Sensors
  • Advanced user interfaces
  • Graphene
  • Energy storage and advanced battery technologies

Figure: Internet-connected devices in use worldwide, 2014 versus 2024

SEMI today announced details from the SEMI World Fab Forecast Report illuminating the state of the semiconductor manufacturing industry, coinciding with SEMI’s Industry Strategy Symposium (ISS) in Half Moon Bay, Calif.  Among the insights across the various segments, the changes in the DRAM (Dynamic Random-Access Memory) segment exemplify the significant shifts in capacity and technology that are driving fab capacity and investment.

Based on SEMI World Fab Forecast data, SEMI forecasts a favorable outlook for DRAM, with bit demand rising and selling prices improving in 2013 and 2014. The DRAM sector experienced a sharp decline during the 2008/2009 financial crisis and subsequently contracted, both in the number of suppliers and in installed fab production capacity.  Installed DRAM capacity is forecast to return to positive growth by the end of 2016, yet the path to growth is clouded by daunting technology issues.

In the five years prior to the economic downturn, yearly growth rates for installed fab capacity trended in the high double digits.  In 2007, eleven major companies produced DRAM chips in approximately 40 facilities globally. Installed capacity increased 40 to 50 percent each year from 2003 to 2007. According to SEMI data, only six companies (20 facilities) currently produce significant DRAM capacity. The industry has consolidated, with several front-end fabs converted from DRAM to Logic, Flash or other purposes.

According to SEMI fab data, a capacity loss often occurs when a fab transitions to the next leading-edge technology.  Increased complexity and more process steps result in fabs producing 10 to 20 percent fewer wafers per square foot of cleanroom; this trend affects virtually all industry segments at the 30/28nm node and below. The SEMI World Fab Forecast report tracks nine fabs following this pattern.  From 2014 to 2016, DRAM fabs are expected to lose a total of about 25,000 wafers per month as they transition to the next leading-edge technology node.

To compensate for this, and to meet expected bit demand, the industry is beginning to add new capacity with new fabs and lines. By 2015, three or four new fabs or lines will be in operation. All will require time to ramp up, meaning that the net capacity change likely will not shift from negative to positive growth until 2016, when about 3 percent growth is forecast.  How this could potentially affect worldwide DRAM capacity is illustrated in the figure below:


Figure: Worldwide DRAM capacity for Front End facilities in 300mm equivalent wafers per month and annual rate of change in percent
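To make the capacity arithmetic described above concrete, the sketch below models how conversion losses and new-fab ramps could net out year by year. Only the roughly 25,000 wafer-per-month total loss comes from the text; the baseline capacity, the yearly split of that loss and the new-fab ramp schedule are illustrative assumptions, not SEMI data, chosen so that the net change stays negative through 2015 and reaches about 3 percent growth in 2016.

```python
# Illustrative model of net worldwide DRAM capacity change (assumptions, not SEMI data).
# Only the ~25,000 wafers-per-month (wpm) total conversion loss comes from the article;
# the baseline capacity, the yearly loss split and the new-fab ramp are hypothetical.

baseline_wpm = 1_100_000  # assumed installed DRAM capacity, 300mm-equivalent wafers/month

conversion_loss_wpm = {2014: 10_000, 2015: 10_000, 2016: 5_000}  # sums to ~25,000 wpm
new_capacity_wpm = {2014: 0, 2015: 5_000, 2016: 38_000}          # assumed ramp of new fabs/lines

capacity = baseline_wpm
for year in (2014, 2015, 2016):
    start = capacity
    capacity += new_capacity_wpm[year] - conversion_loss_wpm[year]
    change = (capacity - start) / start
    print(f"{year}: {capacity:,} wpm ({change:+.1%} net change)")
# Expected pattern: negative net change in 2014 and 2015, roughly +3.0% in 2016.
```

The point of the sketch is the shape of the curve rather than the absolute numbers: losses from node transitions dominate until the new fabs have ramped enough to overtake them.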


Shrinking DRAM nodes has become increasingly difficult.  Most companies are at the 21/20nm node now, with leaders at 15nm (1Xnm).  As conventional processing offers fewer and fewer opportunities, other technologies may move forward to eventually replace conventional DRAM scaling, such as non-volatile memories like MRAM (Magnetic RAM), FeRAM (Ferro-electric RAM) and ReRAM (Resistive RAM).  As these technologies surface, DRAM capacity may be challenged again.

In summary, DRAM appears to be headed towards positive growth by 2016. With the introduction of new technologies, it remains to be seen how DRAM capacity will be impacted and how much new wafer capacity will be needed.

The SEMI World Fab Forecast Report lists over 40 facilities making DRAM products. Many facilities have major spending for equipment and construction planned for 2015. Learn more at www.semi.org/MarketInfo/FabDatabase and www.youtube.com/user/SEMImktstats.

Worldwide semiconductor market revenue is on track to achieve a 9.4 percent expansion this year, with broad-based growth across multiple chip segments driving the best industry performance since 2010.

Global revenue in 2014 is expected to total $353.2 billion, up from $322.8 billion in 2013, according to a preliminary estimate from IHS Technology (NYSE: IHS). The nearly double-digit-percentage increase follows respectable growth of 6.4 percent in 2013, a decline of more than 2.0 percent in 2012 and a marginal increase of 1.0 percent in 2011. The performance in 2014 represents the highest rate of annual growth since the 33 percent boom of 2010.

“This is the healthiest the semiconductor business has been in many years, not only in light of the overall growth, but also because of the broad-based nature of the market expansion,” said Dale Ford, vice president and chief analyst at IHS Technology. “While the upswing in 2013 was almost entirely driven by growth in a few specific memory segments, the rise in 2014 is built on a widespread increase in demand for a variety of different types of chips. Because of this, nearly all semiconductor suppliers can enjoy good cheer as they enter the 2014 holiday season.”

More information on this topic can be found in the latest release of the Competitive Landscaping Tool from the Semiconductors & Components service at IHS.

Widespread growth

Of the 28 key sub-segments of the semiconductor market tracked by IHS, 22 are expected to expand in 2014. In contrast, only 12 sub-segments of the semiconductor industry grew in 2013.

Last year, the key drivers of the growth of the semiconductor market were dynamic random access memory (DRAM) and data flash memory. These two memory segments together grew by more than 30 percent while the rest of the market only expanded by 1.5 percent.

This year, the combined revenue for DRAM and data flash memory is projected to rise about 20 percent, while the rest of the market will grow by 6.7 percent, supporting the overall market increase of 9.4 percent.
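Those growth rates can be cross-checked as a weighted blend of the two groups. In the short calculation below, the 2013 revenue share of DRAM plus data flash is backed out from the published rates rather than taken from the IHS data, so it should be read as an inferred, illustrative figure.

```python
# Cross-check of the 2014 growth figures (growth rates from the article;
# the implied memory share is inferred here, not an IHS-reported figure).

total_2013_bn = 322.8   # 2013 worldwide semiconductor revenue, $ billions
total_2014_bn = 353.2   # 2014 preliminary estimate, $ billions

overall_growth = total_2014_bn / total_2013_bn - 1
print(f"Overall growth: {overall_growth:.1%}")  # ~9.4%

memory_growth = 0.20    # DRAM + data flash, ~20% growth in 2014
rest_growth = 0.067     # rest of the market, 6.7% growth in 2014

# Solve overall = s * memory_growth + (1 - s) * rest_growth for the implied 2013 memory share s.
implied_share = (overall_growth - rest_growth) / (memory_growth - rest_growth)
print(f"Implied 2013 DRAM + data flash share: {implied_share:.0%} "
      f"(about ${implied_share * total_2013_bn:.0f}B)")  # ~20%, ~$66B
```

Under these assumptions the two memory segments would account for roughly a fifth of 2013 revenue, which is consistent with them contributing an outsized share of 2014 growth.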

In 2013, only eight semiconductor sub-segments grew by 5 percent or more and only three achieved double-digit growth. In 2014, over half of all the sub-segments—i.e., 15—will grow by more than 5 percent and eight markets will grow by double-digit percentages.

This pervasive growth is delivering general benefits to semiconductor suppliers, with 70 percent of chipmakers expected to enjoy revenue growth this year, up from 53 percent in 2013.

The figure below presents the growth of the DRAM and data flash segments compared to the rest of the semiconductor market in 2013 and 2014.

Figure: Growth of the DRAM and data flash segments compared to the rest of the semiconductor market, 2013 and 2014

Semiconductor successes

The two market segments enjoying the strongest and most consistent growth in the last two years are DRAM and light-emitting diodes (LEDs). DRAM revenue will climb 33 percent in both 2013 and 2014, following declines, often steep, in DRAM revenue in five of the six preceding years.

The LED market is expected to grow by more than 11 percent in 2014. This continues an unbroken period of growth for LED revenues stretching back at least 13 years.

Major turnarounds are occurring in the analog, discrete and microprocessor markets, which will swing from declines to strong growth in every sub-segment. Most segments will see their growth improve by more than 10 percent compared to the declines experienced in 2013.

Furthermore, programmable logic device (PLD) and digital signal processor (DSP) application-specific integrated circuits (ASICs) will experience dramatic improvements in growth. PLD revenue in 2014 will grow by 10.2 percent compared to 2.1 percent in 2013, and DSP ASICs will rise by 3.8 percent compared to a 31.9 percent collapse in 2013.

Moving on up

Among the top 20 semiconductor suppliers, MediaTek and Avago Technologies attained the largest revenue growth and rise in the rankings in 2014. Both companies benefited from significant acquisitions.

MediaTek is expected to jump up five places to the 10th rank and become the first semiconductor company headquartered in Taiwan to break into the Top 10. Avago Technologies is projected to jump up eight positions in the rankings to No. 15.

The strongest growth by a semiconductor company based purely on organic revenue increase is expected to be achieved by SK Hynix, with projected growth of nearly 23 percent.

No. 13-ranked Infineon has announced its plan to acquire International Rectifier. If that acquisition is finalized in 2014, the combined company would jump to No. 10 in the overall rankings and enjoy 16 percent combined growth.

The table below presents the preliminary IHS ranking of the world’s top 20 semiconductor suppliers in 2013 and 2014 based on revenue.

Table: Preliminary IHS ranking of the world’s top 20 semiconductor suppliers in 2013 and 2014, based on revenue

Troubles for consumer electronics and Japan

Semiconductor revenue in 2014 will grow in five of the six major semiconductor application end markets: data processing, wired communications, wireless communications, automotive electronics and industrial electronics. The only market segment experiencing a decline will be consumer electronics. Revenue will expand by double-digit percentages in four of the six markets.

Japan continues to struggle and is the only worldwide region that will see a decline in semiconductor revenue this year. The other three geographies—Asia-Pacific, the Americas and the Europe, Middle East and Africa (EMEA) region—will see healthy growth, led by Asia-Pacific, which will post an expected revenue increase of 12.5 percent.