
By Christian G. Dieseldorff, Industry Research & Statistics Group, SEMI

Semiconductor capital expenditures (without fabless and backend) are expected to slow in rate, but continue to grow by 5.8 percent in 2015 (over US$66 billion) and 2.5 percent in 2016 (over $68 billion), according to the May update of the SEMI World Fab Forecast report. A significant part of this capex is fab equipment spending.

Fab equipment spending is forecast to depart from the typical historic trend over the past 15 years of two years of spending growth followed by one year of decline.  Departing from the norm, equipment spending could grow every year for three years in a row: 2014, 2015, and 2016 (see Table 1).

Table 1: Fab Equipment Spending by Wafer Size


At the end of May 2015, SEMI published its latest update to the World Fab Forecast report, reporting on more than 200 facilities with equipment spending in 2015, and more than 175 facilities projected to spend in 2016.

The report shows a large increase in spending for DRAM, more than 45 percent in 2015. Also, spending for 3D NAND is expected to increase by more than 60 percent in 2015 and more than 70 percent in 2016. The foundry sector is forecast to show 10 percent higher fab equipment spending in 2015, but may experience a decline in 2016.  Even with this slowdown, the foundry sector is expected to be the second largest in equipment spending, surpassed only by spending in the memory sector.

A weak first quarter of 2015 is pulling down spending for the first half of the year, but a stronger second half of 2015 is expected. Intel and TSMC reduced their capital expenditure plans for 2015, while other companies, especially in memory, are expected to increase their spending.

The SEMI data details how this varies by company and fab. For example, the report predicts increased fab equipment spending in 2015 by TSMC and Samsung. Samsung is the “wild card” on the table, with new fabs in Hwaseong, Line 17 and S3. The World Fab Forecast report shows how Samsung is likely to ramp these fabs into 2016. In addition, Samsung is currently ramping a large fab in China for 3D NAND (VNAND) production. Overall, the data show that Samsung will likely spend a bit more for memory in 2015 and much more in 2016. After two years of declining spending for System LSI, Samsung is forecast to show an increase in 2015, and especially in 2016.

Figure 1 depicts fab equipment spending by region for 2015.

Figure 1: Fab Equipment Spending in 2015 by Region; SEMI World Fab Forecast Report (May 2015).


In 2015, fab equipment spending by Taiwan and Korea together is expected to make up over 51 percent of worldwide spending, according to the SEMI report. In 2011, Taiwan and Korea accounted for just 41 percent, and the highest spending region was the Americas, with 22 percent (now just 16 percent). China’s fab spending is still dominated by non-Chinese companies such as SK Hynix and Samsung, but the impact of Samsung’s 3D NAND project in Xian is significant. China’s share of fab spending grew from 9 percent in 2011 to a projected 11 percent in 2015; because of Samsung’s fab in Xian, the share will grow to 13 percent in 2016.

Table 2 shows the share of each region’s fab equipment spending driven by its top two companies:

Table 2: Share of Fab Equipment Spending of Top Two Companies per Region


Over time, fab equipment spending has also shifted by technology node.  See Figure 2, where nodes have been grouped by size:

Figure 2: Fab Equipment Spending by Nodes (Grouped)


In 2011, most fab equipment spending was for nodes between 25nm and 49nm (accounting for $24 billion), while nodes of 24nm or smaller drove less than $7 billion in spending. By 2015, spending flipped, with nodes at or below 24nm accounting for $27 billion while spending on nodes between 25nm and 49nm dropped to $8 billion. The SEMI World Fab data also predict more spending on nodes between 38nm and 79nm, due to increases in the 3D NAND sector in 2015, accelerating in 2016 (not shown in the chart).

When is the next contraction?

As noted above, over the past 15 years the industry has never achieved three consecutive years of positive growth rates for spending.  2016 may be the year which deviates from this historic cycle pattern.  A developing hypothesis is that with more consolidation, fewer players compete for market positions, resulting in a more controlled spending environment with much lower volatility.

Learn more about the SEMI fab databases at: www.semi.org/MarketInfo/FabDatabase.

By Paula Doe, SEMI

As if scaling to 7nm geometries and going vertical with FinFETs, TSVs and other emerging technologies wasn’t challenge enough, the emerging market for connected smart devices will bring more changes to the semiconductor sector. And then there’s 3D printing looming in the wings.

Sometime between 2009 and 2010, there was a point of inflection, where the number of connected devices began outnumbering the planet’s human population. And these aren’t just laptops, mobile phones, and tablets – they also include sensors and everyday objects that were previously unconnected, says Tony Shakib, Cisco Systems VP IoE Vertical Solutions, who will talk about the impact of these changes on the chip industry at SEMICON West this summer in San Francisco.  Connected “things” may reach 25 to 50 billion by the year 2020, he projects. These connections of people, process, data and things will create opportunities for new revenue streams, new options for competitive advantage, and new operating models to drive both efficiency and value, potentially driving massive gains in efficiency, business growth, and quality of life, he suggests. “But as we connect the unconnected, this will require that we think differently about business strategy and IT, analytics, security, and more.”

Source: Cisco


Chip makers will need to provide easy-to-use IoT security for startups

One big change: some 50 percent of Internet of Things (IoT) solutions by 2017 will probably come from startups, according to Gartner’s projections. “Whatever the exact percentage, the increased role of new and small players in IoT edge devices will be a fundamental paradigm shift from the big companies that have conventionally dominated the electronics industry,” says Gowri Chindalore, head of Technology and Business Strategy for the Microcontrollers business group at Freescale, who will speak on the issue at SEMICON West’s “Monetizing the IoT: Opportunities and Challenges” session. “And these startups’ knowledge of security is often very low. So as IC makers we need to make it easy for them to do.” He suggests the best solution is to offer on-chip security features, such as secure storage, cryptographic accelerators, and tamper-resistance mechanisms, and to supplement them with a software dashboard that makes it easy for the systems maker to set up and enable the features appropriate for the application. Though the encryption technology is very complex, by using library programs and selling in volume, the actual cost can probably be reduced to a few cents per chip.

Security for the internet will also improve markedly within several years as passwords are replaced by personal transmitters that automatically send secure codes to websites at log on. Similarly, local aggregator devices at the edge for all the IoT devices in the house or the factory will serve as the security gateway to screen users or devices by transmitted codes or biometric sensors. “We need proliferation of these security features into even all the benign IoT gadgets in the house to protect the network, but consumers will be willing to pay the small extra cost for security — especially after a few more highly publicized instances of hacking,” he notes.

Designers combining more IP blocks face challenges in reliability and verification

The key challenge across the board from the design side for successful IoT devices will be figuring out how to combine the right component capabilities of sensors and memory and processing and connectivity and size and power for a compelling application, and then making the right tradeoffs in the architecture to make it all work, explains Steve Carlson, VP marketing, Cadence Design Systems, another speaker at SEMICON West. “IP blocks will be especially useful for smaller companies to add functions without necessarily having the in-house expertise,” he notes. But combining the blocks will confront many users with dramatically new issues they haven’t had to deal with before, such as isolating noisy analog parts from the digital as they add RF and sensors, all at near-threshold and ultralow power. That will mean more issues with variation and reliability, and verification will increasingly need to include both hardware blocks and software together, so emulation will become more critical, he notes.

Fabs may need to deal with more diverse processes, but may improve productivity

“The IoT will drive demand for more IC manufacturing across a wide range of technologies, from the most advanced logic process to high voltage devices and MEMS, all with diverse requirements,” says Peter Huang, VP Field Technical Support, TSMC North America, another speaker. He notes that MEMS and other emerging devices, ranging from micro-lenses for machine vision to batteries to power wireless sensors, will require some unique tools and processes, and will be less easily scalable than CMOS.  Material handling and the need for isolated lines will create additional challenges. “Heterogeneous integration will require 2.5D packaging for both form factor and cost,” he suggests. “And the real challenge will be high volume manufacturing and IP interface at the package level.”

Though manufacturing equipment is already highly automated and interconnected, the availability of hundreds of low-cost, connected sensors may bring opportunities to increase tool automation and productivity, he adds.


Compact integration of multiple chip and sensor technologies for IoT devices will demand more sophisticated system-in-package technology. The new Apple Watch has 30 components in its core S1 SiP, all packed on to a 26mm x 28mm motherboard and overmolded with a conventional IC packaging resin compound. (from Chipworks)

Progress on technology for 3D printing of tooling and components

Then there’s the disruptive potential for 3D printing some of the tooling and components all along the supply chain to speed time to market, allow more customization, reduce weight and simplify dealing with legacy parts — if the process can meet the required quality and cost. Phillip Trinidad, president of service provider Proto Café, who has worked with semiconductor sector players,  argues that progress in optimizing designs now means additive manufacturing is increasingly becoming suitable not just for prototyping, but also for production of specialty parts in performance plastics.

In addition, there’s recent progress in 3D printing for challenging metal industrial parts, which will be addressed at SEMICON West “Factory of the Future: Disruptive Technologies from IoT to 3D Printing — Impact on the Semiconductor Manufacturing Sector” session. Ryan Dehoff, lead for Metal Additive Manufacture at Oakridge National Laboratory, will provide an update on the current state of the art for printing in metal, while Wayne King, director of the Initiative for Accelerated Certification of Additive Manufactured Metals, will talk about the progress on speeding qualification of the additive metal parts by modeling and inline process monitoring and control.

Along with the regular coverage of next-generation scaling technology, SEMICON West 2015 will also address the impact of the Internet of Things and 3D printing on manufacturing technology across the semiconductor supply chain, as well as related developments in MEMS, emerging non-volatile memory technology, and automotive and biomedical applications. Please visit www.semiconwest.org.

Different forecasting algorithms are highlighted and a framework is provided on how best to estimate product demand using a combination of qualitative and quantitative approaches.

BY JITESH SHAH, Integrated Device Technology, San Jose, CA

Nothing in the world of forecasting is more complex than predicting demand for semiconductors, but this is one business where accurate forecasting could be a matter of long-term survival. Not only will the process of forecasting help reduce costs for the company, by holding the right amount of inventory in the channels and knowing what parts to build when, but implementing a robust and self-adaptive system will also keep customers happy by providing them with the products they need, when they need them. Other benefits include improved vendor engagements and optimal resource (labor and capital) allocation.

Talking about approaches…

There are two general approaches to forecasting a time-based event: a qualitative approach and a quantitative, or more numbers-based, approach. If historical time-series data on the variable of interest is sketchy, or if the event being forecasted is related to a new product launch, a more subjective or expert-based predictive approach is necessary, but we all intuitively know that. New product introductions usually involve active customer and vendor engagements, and that allows us to have better control over what to build, when, and in what quantity. Even then, the Bass Diffusion Model, a technique geared toward predicting sales for a new product category, could be employed, but that will not be discussed in this context.

Now, if past data on the forecasted variable is handy and quantifiable, and it is fair to assume that the pattern of the past will likely continue in the future, then a more quant-based, algorithmic and somewhat automated approach is almost a necessity.

But how would one go about deciding whether to use an automated approach to forecasting or a more expert-based approach? A typical semiconductor company’s products could be segmented into four quadrants (FIGURE 1), and deciding whether to automate the process of forecasting will depend on which quadrant the product fits best.

Figure 1


Time series modeling

Past shipment data over time for a product, or a group of products, that you are trying to forecast demand for is usually readily available, and that is generally the only data you need to design a system to automate the forecasting process. The goal is to discover a pattern in the historical, time-series data and extrapolate that pattern into the future. An ideal system should be built in such a way that it evolves, or self-adapts, and selects the “right” algorithm from the pre-built toolset if the shipment pattern changes. A typical time-series forecasting model has just two variables: an independent time variable and a dependent variable representing the event we are trying to forecast.

That event Qt (order, shipment, etc.) we are trying to forecast is more or less a function of the product’s life-cycle or trend, seasonality or business cycle and randomness, shown in the “white board” style illustration of FIGURE 2.
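Figure 2 is a whiteboard-style sketch that is not reproduced here; a minimal written form of that decomposition, stated as an assumption about what the figure shows (using an additive structure for concreteness), is:

$$Q_t = f(T_t, S_t, \varepsilon_t) \approx T_t + S_t + \varepsilon_t$$

where $T_t$ captures the trend or life-cycle component, $S_t$ the seasonal or business-cycle component, and $\varepsilon_t$ the random component for period $t$.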

Figure 2


Trend and seasonality or business cycle are typically associated with longer-range patterns and hence are best suited to be used to make long-term forecasts. A shorter-term or horizontal pattern of past shipment data is usually random and is used to make shorter-term forecasts.

Forecasting near-term events

Past data exhibiting randomness with horizontal patterns can be reasonably forecasted using either a Naïve method or a simple averaging method. The choice between the two will depend on which one gives lower Mean Absolute Error (MAE) and Mean Absolute % Error (MAPE).

Naïve Method The sample table in FIGURE 3 shows 10 weeks’ worth of sales data. Using the Naïve approach, the forecasted value for the 2nd week is just what was shipped in the 1st week. The forecasted value for the 3rd week is the actual sales value in the 2nd week, and so on. The difference between the actual value and the forecasted value represents the forecast error, and the absolute values of these errors are summed to calculate the total error. MAE is just the mean of that total error. A similar approach is used to calculate MAPE, but now each individual error is divided by the actual sales volume to calculate a % error; these are then summed and divided by the number of forecasted values to arrive at MAPE.
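As a rough illustration of how this could be coded, the following sketch computes the Naïve forecast and both error metrics in plain Python; the weekly sales figures are hypothetical placeholders, not the data in Figure 3.

```python
# Naive one-step-ahead forecast: the forecast for week t is the actual from week t-1.

def naive_forecast(actuals):
    return actuals[:-1]                        # forecasts for weeks 2..n

def mae_mape(actuals, forecasts):
    """Mean Absolute Error and Mean Absolute % Error over the forecasted weeks."""
    errors = [abs(a - f) for a, f in zip(actuals, forecasts)]
    mae = sum(errors) / len(errors)
    mape = 100 * sum(e / a for e, a in zip(errors, actuals)) / len(errors)
    return mae, mape

sales = [22, 25, 19, 28, 24, 26, 21, 27, 23, 25]   # 10 weeks of hypothetical sales
forecasts = naive_forecast(sales)
mae, mape = mae_mape(sales[1:], forecasts)         # score weeks 2..10
print(f"Naive method: MAE={mae:.2f}, MAPE={mape:.1f}%")
```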

Figure 3


Averaging Instead of using only the last observed event to forecast the next event, a better approach would be to use the mean of all past observations as the next period’s forecast. For example, the forecasted value for the 3rd week is the mean of the 1st and 2nd weeks’ actual sales values. The forecasted value for the 4th week is the mean of the previous three actual sales values, and so on (FIGURE 4).
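A corresponding sketch of the averaging method (again with hypothetical numbers) simply replaces the naive forecast with the cumulative mean of all prior weeks:

```python
# Cumulative-average forecast: the forecast for week t is the mean of weeks 1..t-1.

def average_forecast(actuals):
    return [sum(actuals[:t]) / t for t in range(1, len(actuals))]

def mae(actuals, forecasts):
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(forecasts)

sales = [22, 25, 19, 28, 24, 26, 21, 27, 23, 25]   # same hypothetical 10 weeks
print(f"Averaging method: MAE={mae(sales[1:], average_forecast(sales)):.2f}")
```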

Figure 4


MAE and MAPE for the Naïve method are 4.56 and 19% respectively, and the same for the averaging method are 3.01 and 13% respectively. Right there, one can conclude that averaging is better than the simple Naïve approach.

Horizontal Pattern with Level Shift But what happens when there is a sudden shift (anticipated or not) in the sales pattern like the one shown in FIGURE 5?

Figure 5


The simple averaging approach needs to be tweaked to account for that, and that is where a moving average approach is better suited. Instead of averaging across the entire time series, only 2 or 3 or 4 recent time events are used to calculate the forecast value. How many time periods to use will depend on which one gives the smallest MAE and MAPE values and that can and should be parameterized and coded. The tables in FIGURE 6 compare the two approaches, and clearly the moving average approach seems to be a better fit in predicting future events.
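A sketch of how that window-size selection might be parameterized, again on hypothetical data containing a level shift:

```python
# Moving-average forecast with the window size chosen by lowest MAE.

def moving_average_forecast(actuals, window):
    """Forecast for period t is the mean of the previous `window` actuals."""
    return [sum(actuals[t - window:t]) / window
            for t in range(window, len(actuals))]

def mae(actuals, forecasts):
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(forecasts)

sales = [22, 25, 19, 28, 24, 26, 45, 48, 46, 49, 47, 50]   # hypothetical data with a level shift

# Try windows of 2, 3 and 4 periods and keep the one with the smallest error.
best_window = min(range(2, 5),
                  key=lambda w: mae(sales[w:], moving_average_forecast(sales, w)))
print("Best window size:", best_window)
```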

Figure 6


Exponential Smoothing But oftentimes, there is a better approach, especially when the past data exhibits severe and random level shifts.

This approach is well suited for such situations because the exponentially weighted moving average of the entire time series deemphasizes older data while still including them and, at the same time, weighs recent observations more heavily. That relationship between the actual and forecasted values is shown in FIGURE 7.
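The smoothing relationship depicted in Figure 7 is not reproduced here; the standard single-exponential-smoothing form it presumably shows, stated as an assumption, is:

$$F_{t+1} = \alpha A_t + (1 - \alpha) F_t, \qquad 0 < \alpha \le 1,$$

where $A_t$ is the actual value for period $t$, $F_t$ is its forecast, and $\alpha$ is the smoothing constant.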

Figure 7


Again, the lowest MAE and MAPE will help decide the optimal value for the smoothing constant and, as always, this can easily be coded based on the data you already have, and can be automatically updated as new data trickles in.
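A minimal sketch of that selection, assuming the single-exponential-smoothing form above and hypothetical data: the smoothing constant is simply grid-searched for the lowest MAE.

```python
# Single exponential smoothing with a grid search over the smoothing constant.

def exp_smooth_forecast(actuals, alpha):
    """F[t+1] = alpha*A[t] + (1-alpha)*F[t]; the first forecast is seeded
    with the first observed value."""
    forecasts = [actuals[0]]                  # forecast for period 2
    for a in actuals[1:-1]:
        forecasts.append(alpha * a + (1 - alpha) * forecasts[-1])
    return forecasts                          # forecasts for periods 2..n

def mae(actuals, forecasts):
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(forecasts)

sales = [22, 25, 19, 28, 24, 26, 45, 48, 46, 49]          # hypothetical data

alphas = [a / 10 for a in range(1, 10)]
best_alpha = min(alphas, key=lambda a: mae(sales[1:], exp_smooth_forecast(sales, a)))
print("Best smoothing constant:", best_alpha)
```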

But based on the smoothing equation above, one must wonder how the entire time series is factored in when only the most recent actual and forecasted values are used as part of the next period’s forecast. The math in FIGURE 8 explains how.

Figure 8


The forecast for the second period is assumed to be the first observed value. The third period is the first true derived forecast, and with subsequent substitutions one quickly finds that the forecast for the nth period is a weighted average of all previously observed events. The weight ascribed to later events relative to earlier ones is shown in the plot in FIGURE 9.
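Written out under that seeding ($F_2 = A_1$), the repeated substitution unrolls into a weighted average in which the weight on an observation $k$ periods old decays as $\alpha(1-\alpha)^k$:

$$F_{n+1} = \alpha A_n + \alpha(1-\alpha) A_{n-1} + \alpha(1-\alpha)^2 A_{n-2} + \cdots + (1-\alpha)^{n-1} A_1.$$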

Figure 9


Making longer term forecasts

A semiconductor product’s lifecycle is usually measured in months but surprisingly, there are quite a few products with lifespans measured in years, especially when the end applications exhibit long and growing adoption cycles. These products not only exhibit shorter-term randomness in time-series but show a longer-term seasonal / cyclical nature with growing or declining trend over the years.

The first step in estimating the forecast over the longer term is to smooth out some of that short-term randomness using the approaches discussed before. The unsmoothed and smoothed curves might resemble the plot in FIGURE 10.

Figure 10


Clearly, the data exhibit a long-term trend along with a seasonal or cyclical pattern that repeats every year, and Ordinary Least Squares (OLS) regression is the ideal approach to forming a function that estimates that trend and the parameters involved. But before crunching the numbers, the dataset has to be prepped to include a set of dichotomous variables representing the different intervals of that seasonal behavior. Since the seasonality in this situation is by quarter, representing Q1, Q2, Q3 and Q4, only three of these dummies are included in the model. The fourth one, Q2 in this case, forms the baseline against which the significance of the other three quarters is measured (FIGURE 11).
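A hedged sketch of that data preparation and regression, using numpy’s least-squares solver on hypothetical quarterly sales (the variable names and figures are illustrative, not the data behind Figures 10 and 11):

```python
import numpy as np

# Hypothetical quarterly sales: a rising trend plus a repeating seasonal pattern.
sales = np.array([40, 30, 45, 55, 48, 36, 52, 63, 55, 42, 60, 72], dtype=float)
quarters = np.tile([1, 2, 3, 4], 3)            # Q1..Q4 repeated over three years
t = np.arange(1, len(sales) + 1, dtype=float)  # time index for the trend term

# Dichotomous (dummy) variables for Q1, Q3 and Q4; Q2 is the omitted baseline.
d1 = (quarters == 1).astype(float)
d3 = (quarters == 3).astype(float)
d4 = (quarters == 4).astype(float)

# Design matrix: intercept, linear trend, and the three seasonal dummies.
X = np.column_stack([np.ones_like(t), t, d1, d3, d4])
coeffs, *_ = np.linalg.lstsq(X, sales, rcond=None)
b0, b1, b2, b3, b4 = coeffs
print(f"b0={b0:.2f}  trend b1={b1:.2f}  Q1 b2={b2:.2f}  Q3 b3={b3:.2f}  Q4 b4={b4:.2f}")

# Forecast the next period (period 13, which falls in Q1).
print("Q1 forecast for period 13:", round(b0 + b1 * 13 + b2, 1))
```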

Figure 11


The functional form of the forecasted value by quarter looks something like what’s shown in FIGURE 12.

Figure 12


The intercept b0 moves up or down based on whether the quarter in question is Q2 or not. If b2, b3 and b4 are positive, Q2 will exhibit the lowest expected sales volume. The other three quarters will show increasing expected sales in line with the increase in the respective estimated parameter values. And this equation can be readily used to reasonably forecast an event a few quarters or a few years down the road.
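Figure 12 itself is not reproduced here; under the quarter assignments described above (Q2 as the omitted baseline, with $b_1$ assumed to sit on a linear time trend), the forecast function would take a form like:

$$\hat{Q}_t = b_0 + b_1 t + b_2 D_{Q1,t} + b_3 D_{Q3,t} + b_4 D_{Q4,t},$$

where each dummy $D$ equals 1 when period $t$ falls in that quarter and 0 otherwise.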

So there you have it. This shows how easy it is to automate some features of the forecasting process, and the importance of building an intelligent, self-aware and adaptive forecasting system. The results will not only reduce cost but also help refocus your supply-chain planning efforts on bigger and better challenges.

JITESH SHAH is a principal engineer with Integrated Device Technology, San Jose, CA

UPDATE: 15 December 2015: Minor changes made to reflect correct ARM product nomenclature.

By Jeff Dorsch, Contributing Editor

Those 16-nanometer chips with FinFETs? Yesterday’s news. Taiwan Semiconductor Manufacturing wants you to know that they’re ready, willing, and able to help you design chips with 10-nanometer features.

The foundry presented Monday morning with its long-time partners, ARM Holdings and Synopsys, on its preparations for the 10nm process node.


“The N10 design ecosystem is ready for customer design starts,” said Willy Chen, TSMC’s deputy director of Design & Technology Platform. He noted that TSMC has been collaborating with Synopsys for 15 years, while ARM and TSMC together offer “the most advanced ARM processor cores in the most advanced TSMC technology.”

Rob Aitken of ARM added, “10-nanometer enablement needs an ecosystem,” which the three companies are prepared to provide. He said ARM has “some cool things under development to make chip design faster,” without elaborating.

Haroon Gahur, principal design engineer at ARM, began the program by describing attributes of the ARM Cortex-A72 processor design, which he said consumes 75% less energy than previous ARM cores.

Joe Walston of Synopsys said ARM used the DC Graphical, IC Compiler I, and IC Compiler II tools from Synopsys in developing Cortex-A72, with signoff performed by PrimeTime SI. ARM’s Gahur noted that IC Compiler II provided a significant runtime advantage over its predecessor, IC Compiler I, by completing its run in five hours, compared with about 24 hours for IC Compiler I.

The program also featured Denny Liu, deputy general manager of Design Technology at MediaTek, who spoke of his company’s involvement with Synopsys and TSMC. He detailed MediaTek’s Helio X20, introduced last month, which is a tri-cluster mobile processor with 10 cores. MediaTek also employed IC Compiler II in designing the chip.

For all the 10nm talk, TSMC is hitting its stride with the N16FF+ process. Synopsys and TSMC announced Monday that the IC Compiler II place-and-route tool is certified for the foundry’s 16nm FinFET Plus process.

“The 16FF+ design flow is here,” TSMC’s Chen said.

The program finished with a presentation by Henry Sheng, group director of research and development at Synopsys, who noted that 90 percent of FinFET tapeouts are done with Synopsys place-and-route tools. Touting his company’s “healthy working relationship with TSMC,” Sheng said that emerging process nodes present a number of challenges, specifically new yield and manufacturing rules, process scaling, and new FinFET devices. Of FinFETs, he said, “These things are electrically different.”

Separately, Synopsys announced Sunday that it has agreed to acquire Atrenta, without disclosing financial terms. The transaction is expected to close this summer.

SEMI today announced the update of its World Fab Forecast report for 2015 and 2016. The report projects that semiconductor fab equipment spending (new and used, for front-end facilities) is expected to increase 11 percent (to US$38.7 billion) in 2015 and another 5 percent (to $40.7 billion) in 2016. Since February 2015, SEMI has made 282 updates to its detailed World Fab Forecast report, which tracks fab spending for construction and equipment, as well as capacity changes, technology node transitions and product type changes by fab.

Capital expenditure (capex without fabless and backend) by device manufacturers is forecast to increase almost 6 percent in 2015 and over 2 percent in 2016. Fab equipment spending is forecast to depart from the typical historic trend over the past 15 years of two years of spending growth followed by one of decline.  For the first time, equipment spending could grow every year for three years in a row: 2014, 2015, and 2016.

The SEMI World Fab Forecast Report, a “bottoms up” company-by-company and fab-by-fab approach, lists over 48 facilities making DRAM products and 32 facilities making NAND products. The report also monitors 36 construction projects with investments totaling over $5.6 billion in 2015 and 20 construction projects with investments of over $7.5 billion in 2016.  

According to the SEMI report, fab equipment spending in 2015 will be driven by Memory and Foundry, with Taiwan and Korea projected to become the largest markets for fab equipment at $10.6 billion and $9.3 billion, respectively. The market in the Americas is forecast to reach $6.1 billion, with Japan and China following at $4.5 billion and $4.4 billion, respectively. Europe/Mideast is predicted to invest $2.6 billion. The fab equipment market in South East Asia is expected to total $1.2 billion in 2015.

Learn more about the SEMI World Fab Forecast and plan to attend the SEMI/Gartner Market Symposium at SEMICON West 2015 on Monday, July 13 for an update on the semiconductor supply chain market outlook. In addition to presentations from Gartner analysts, Christian Dieseldorff of SEMI will present on “Trends and Outlook for Fabs and Fab Capacity” and Lara Chamness will present on “Semiconductor Wafer Fab Materials Market and Year-to-Date Front-End Equipment Trends.”   

Fab Equipment Spending (for Front-End Facilities; includes new, used, in-house)

Region | 2014 (US$B) | 2015 (US$B) | Year-over-Year
Americas | 7.8 | 6.1 | -22%
China | 4.1 | 4.4 | 10%
Europe and Mideast | 2.2 | 2.6 | 18%
Japan | 3.8 | 4.5 | 17%
Korea | 7.4 | 9.3 | 27%
SE Asia | 1.1 | 1.2 | 2%
Taiwan | 8.5 | 10.6 | 25%
Total | 34.9 | 38.7 | 11%

Source: SEMI World Fab Forecast Reports (May 2015). Totals may not add due to rounding.

By Lara Chamness, Industry Research and Statistics, SEMI

As the fabless business model has transformed the semiconductor manufacturing landscape, Taiwan and South Korea have undeniably grown into key semiconductor producing regions. However, it should be noted that North America is home to Intel, Texas Instruments, Micron, GLOBALFOUNDRIES, Freescale, Fairchild, Microchip, ON Semiconductor, significant operations of Samsung, and other manufacturers.  As a result, North America accounts for 15 percent (without discretes) of the global total installed fab capacity in 2014 according to the SEMI Fab database.

Figure: 2014 Global Fab Capacities

Due to the presence of leading device manufacturers, North America represents a significant portion of the new equipment market; for the last two years, North America was the second largest market for semiconductor manufacturing equipment. In 2011, North America was the largest market for new equipment. While spending is expected to decline in the region this year, it is anticipated that device manufacturers in North America will still spend about $7 billion on new equipment this year.

Figure: Regional Equipment Markets, 2010-2014

With such a large installed fab base, North America also claims a significant portion of the wafer fab materials market. Comparing global fab capacity to global wafer fab materials market share, North America represents 18 percent of the wafer fab materials market versus 15 percent of global fab capacity. This is due to the advanced device manufacturing that occurs in the region, which requires more advanced materials that fetch higher average selling prices. The same phenomenon occurs in Taiwan and Europe as well.

Figure: Regional Wafer Fab Materials Markets

Even though the equipment market is expected to decline in North America this year, the wafer fab materials market is expected to increase a modest 3 percent. This is due to equipment purchased and installed last year becoming operational. The semiconductor manufacturing market in North America is still very much alive and innovating; whether for advanced manufacturing or chip design, companies in North America have proven adept at evolving with the industry.

Plan to attend the SEMI/Gartner Market Symposium at SEMICON West 2015 on Monday, July 13 for an update on the semiconductor market outlook.

By Paula Doe, SEMI

Ever-growing volumes of data to be stored and accessed, and advancing process technologies for sophisticated control of deposition and etch in complex stacks of new materials, are creating a window of opportunity for an emerging variety of next-generation non-volatile memory technologies. While flash memory goes vertical for higher densities, resistive RAM and spin-transfer magnetic RAM technologies are moving toward commercial manufacture for initial applications in niches that demand a different mix of speed, power and endurance than flash or SRAM. This article delves into some of the topics that will be addressed at SEMICON West 2015.

Micron: Memory Needs to go Vertical

“Memory is going through a transformation, making it an exciting time to be in the sector, with both emerging opportunities and new challenges,” notes Naga Chandrasekaran, Micron Technology VP of process R&D, who will keynote the next-generation memory program at SEMICON West 2015.  As new applications in the connected world drive demand for increased storage, bandwidth, and smart memory, and as conventional planar memory scaling faces more challenges, memory suppliers across the industry face a transformation, requiring new emerging memory types and a transition from planar to vertical technology.

“Memory needs to go vertical to meet growing demands placed on performance, and that means a new set of process and equipment requirements,” says Chandrasekaran.  Scaling the vertical 3DNAND structures is no longer limited by the lithography, but instead is driven by the capability of the etch, film and characterization processes.  “Metrology and structure/defect characterization is a holdup for the entire sector, which is slowing down the cycle time for development,” he notes. “In addition, there are challenges in materials, structural scaling, equipment technology, and manufacturability on the new roadmap that need to be resolved.”

Everspin Targets ST-RAM on GLOBALFOUNDRIES’ 40nm 300mm Process in a Year

Everspin Technologies’ recently introduced 64Mb spin transfer torque MRAM makes a big jump in density over the company’s earlier 16Mb device, as switching the magnetization by a current of electrons of aligned spin allows much better selectivity than applying a magnetic field.  Manufacturing these spin-transfer devices has traditionally been a challenge, but the company claims it sees a clear roadmap to continue to increase the density. “We’re squeezing a 64Mb device on 90nm silicon out of the quarter-micron process equipment in our fab,” says VP of manufacturing Sanjeev Aggarwal, who will give an update on the technology at SEMICON West.  The company is in the process of transferring the technology to a 40nm process on 300mm wafers at partner GLOBALFOUNDRIES in the next year, to significantly reduce the cell size and spacing.

Aggarwal notes that the layers in the magnetic stack of the spin-transfer torque device (ST RAM) are similar in thickness to those of the earlier magnetic-field switched MRAM devices, which have already shipped some  50 million units. In the 28nm version of the ST-RAM, targeted for a couple of years out, the company plans to switch from an in-plane to a perpendicular structure, which will significantly improve efficiency to cut power consumption by an order of magnitude, though the material stack and processing will remain very similar.

Current deposition tools can provide the layer uniformity required for the many ultrathin layers of these magnetic stacks, and etching technology being developed with a vendor for cleanly removing these non-volatile magnetic materials looks promising for 40nm, says Aggarwal. Key is the company’s IP for depositing the MgO tunnel barrier and for stopping the etch uniformly on the tunnel barrier when etching the magnetic stack. “These deposition and etch technologies should extend to 1Gb without much change, though at 16Gb we may need something new,” he adds. “In the next several years we will need help from vendors on better ways to clean up the etch residue, such as by ion milling after RIE, or encapsulating the stack to protect it before the next round of etching.”

Demand for the 64Mb ST-RAM is coming from buffer storage applications, such as high-end enterprise-class solid state drives, where an array of the fast-writing, non-volatile chips holds the data until it can be more permanently filed and stored, and where the high volumes of data require better endurance than flash, reports Terry Hulett, Everspin VP of Systems Engineering and GM of Storage Solutions. “As our products increase in density, we expect to serve the same function for bigger storage systems, like a whole rack of solid state drives,” he projects. The company also targets applications where the instant-on persistent memory can save power, such as powering off the display buffer between refresh cycles in mobile devices, or shutting down the server between operations.

Both Sanjeev Aggarwal (Everspin) and Naga Chandrasekaran (Micron Technology) will update SEMICON West attendees on the state of these emerging memory technologies in a TechXPOT. In addition, Wei D. Lu (Crossbar), Robert Patti (Tezzaron), and Jim Handy (Objective Analysis) will provide analysis and updates at the July 14 event in San Francisco.

Crossbar Aims for Embedded ReRAM IP Blocks from Foundry by End of Year

ReRAM suppliers, meanwhile, argue that their technology potentially offers better prospects for scaling and lower costs than either flash or spin-based MRAM, although it is still a ways from a commercial volume process. Crossbar co-founder and chief scientist Wei Lu, who will also speak at SEMICON West, says the company plans to deliver its ReRAM technology to strategic partners as an IP block for embedded non-volatile memory on logic chips from a leading-edge manufacturing foundry by the end of the year. The company’s approach stores data by forming a conductive metallic bridge through a resistive layer of amorphous silicon sandwiched between two electrode layers, which changes the cell’s resistance.

Lu says the devices are being made with two mask steps on top of the CMOS transistors in a leading foundry. Key to improving performance to commercial levels and achieving very dense crossbar arrays, he notes, is the addition of a high-speed selector device on top of the memory layer. This layer blocks unwanted sneak currents at low voltages and turns on at the threshold level to enable formation of the conduction bridge. “It’s like a volatile RAM stacked on top of the ReRAM, with nanosecond recovery time,” he explains. “This brings the on/off selectivity up to 10^8.”

The initial target market is chip makers who want to embed nonvolatile memory directly in the logic fab, for low-power applications like the IoT, with faster speed and higher endurance than flash. But ultimately the company targets the bigger market of stand-alone enterprise data storage, with lower read and write latencies. “We expect to offer Gigabit-level density at faster speed than NAND flash by around 2017,” claims Lu. He figures ReRAM and STT RAM will both find their place in the more diverse memory market of the future, with STT RAM offering better endurance, and ReRAM offering higher density and lower cost.

Tezzaron Reports High ReRAM Yields from Repair and Remapping through Multilayer Stack

Tezzaron Semiconductor takes a different approach to ReRAM, storing data by moving oxygen vacancies instead of metal ions across the thin layers to change resistance. CTO Robert Patti, another SEMICON West speaker, credits the Tezzaron fab’s ALD technology for the tight control of layer uniformity required to build its 16 tiers of ReRAM cells on top of a CMOS transistor tier from another foundry. Controlling the chemistry of the layering and the reaction is a challenge, but the tiers allow dynamic repair and remapping of defective cells, which Patti claims can enable yields of up to 98%. “The possibility to repair across the vertical structure makes defect density less of an issue, and lets us deal with materials and processes that are less mature,” he notes.

Patti says his company’s aerospace/military customers, who need a non-volatile option with better endurance than flash memory, will likely move to ReRAM within a couple of years. Server makers are also starting to look at the potential for adding a new intermediate level of memory, between the solid state disk and the DRAM, which could significantly improve server performance in analyzing big data by holding big chunks of data for faster access at lower power. It might also reduce system-level costs, although it will require changes in operating system architecture to use it effectively, and sophisticated programming algorithms to manage the memory to limit wear. Demands on the intermediate storage memory should be limited enough that the targeted ReRAM endurance should be sufficient, though it remains lower than DRAM’s 10^15 cycles. If ReRAM endurance reaches 10^12 cycles, the nonvolatile, instant-on memory could become a viable replacement for mobile memory, Patti suggests.

“Vertical NAND is appealing because it’s more familiar, which has probably delayed interest in ReRAM. But ReRAM has a smaller cell size so may ultimately be easier to scale and more cost effective,” argues Patti.

Costs Remain the Challenge

“The only thing that ultimately matters in memory is cost,” argues Objective Analysis analyst Jim Handy, another speaker, pointing out that the target aerospace and enterprise storage applications remain small markets, and volumes are not high enough yet to build up deep understanding of the new materials used, so there will be bumps in the road to come.  But as costs come down as MRAM and ReRAM scale to higher densities, he expects them to gradually take over more mainstream applications, starting with the highest cost memories, so first SRAM (especially SRAM with battery backup), then NOR flash, DRAM and finally NAND flash — perhaps by ~2023.  “We have been predicting that 2017 is the earliest we’ll see significant penetration of 3D NAND into the planar NAND market,” he notes. “And now that some suppliers are saying it will be 2017, it makes me think it may be longer.”

On July 14, all of these industry leaders will present at SEMICON West at the emerging memory technologies TechXPOT (www.semiconwest.org/node/13781). Register now and save $100 off registration.

By Douglas G. Sutherland and David W. Price

Author’s Note: This is the sixth in a series of 10 installments that explore fundamental truths about process control—defect inspection and metrology—for the semiconductor industry. Each article in this series introduces one of the 10 fundamental truths and highlights their implications. Within this article we will use the term inspection to imply either defect inspection or a parametric measurement such as film thickness or critical dimension (CD).

In previous installments we discussed capability, sampling, missed excursions, risk management and variability. Although all of these topics involve an element of time, in this paper we will discuss the importance of timeliness in more detail.

The sixth fundamental truth of process control for the semiconductor IC industry is:

Time is the Enemy of Profitability

There are three main phases to semiconductor manufacturing: research and development (R&D), ramp, and high volume manufacturing (HVM). All of them are expensive and time is a critical element in all three phases.

From a cash-flow perspective, R&D is the most difficult phase: the fab is spending hundreds of thousands of dollars every day on manpower and capital equipment with no revenue from the newly developed products to offset that expense. In the ramp phase the fab starts to generate some revenue early on, but the yield and volume are still too low to offset the production costs. Furthermore, this revenue doesn’t even begin to offset the cost of R&D. It is usually not until the early stages of HVM that the fab has sufficient wafer starts and sufficient yield to start recovering the costs of the first two phases and begin making a profit. Figure 1 below shows the cumulative cash flow for the entire process.

Figure 1. The cumulative cash-flow as a function of time. In the R&D phase the cash-flow is negative but the slope of the curve turns positive in the ramp phase as revenues begin to build. The total costs do not turn positive until the beginning of high-volume manufacturing.


What makes all of this even more challenging is that all the while, the prices paid for these new devices are falling. The time required from initial design to when the first chips reach the market is a critical parameter in the fab’s profitability. Figure 2 shows the actual decay curve for the average selling price (ASP) of memory chips from inception to maturity.

Figure 2. Typical price decline curve for memory products in the first year after product introduction. Similar trends can be seen for other device types.

Consequently, while the fab is bleeding money on R&D, their ability to recoup those expenses is dwindling as the ASP steadily declines. Anything that can shorten the R&D and ramp phases shortens the time-to-market and allows fabs to realize the higher ASP shown on the left hand side of Figure 2.

From Figures 1 and 2 it is clear that even small delays in completing the R&D or ramp phases can make the difference between a fab that is wildly profitable and one that struggles just to break even. Those organizations that are the first to bring the latest technology to market reap the majority of the reward. This gives them a huge head start—in terms of both time and money—in the development of the next technology node and the whole cycle then repeats itself.

Process control is like a window that allows you to see what is happening at various stages of the manufacturing cycle. Without this, the entire exercise from R&D to HVM would be like trying to build a watch while wearing a blindfold. This analogy is not as far-fetched as it may seem. The features of integrated circuits are far too small to be seen and even when inspections are made, they are usually only done on a small percentage of the total wafers produced. For parametric measurements (films, CD and overlay) measurements are performed only on an infinitesimal percentage of the total transistors on each of the selected wafers. For the vast majority of time, the fab manager truly is blind. Parametric measurements and defect inspection are brief moments when ‘the watch maker’ can take off the blindfold, see the fruits of their labor and make whatever corrections may be required.

As manufacturing processes become more complex with multiple patterning, pitch splitting and other advanced patterning techniques, the risk of not yielding in a timely fashion is higher than ever. Having more process control steps early in the R&D and ramp phases increases the number of windows through which you can see how the process is performing. Investing in the highest quality process control tools improves the quality of these windows. A window that distorts the view—an inspection tool with poor capture rate or a parametric tool with poor accuracy—may be worse than no window at all because it wastes time and may provide misleading data. An effective process control strategy, consisting of the right tools, the right recipes and the right sampling all at the right steps, can significantly reduce the R&D and ramp times.

On a per wafer basis, the amount of process control should be highest in the R&D phase when the yield is near zero and there are more problems to catch and correct. Resolving a single rate-limiting issue in this phase with two fewer cycles of learning—approximately one month—can pay for a significant portion of the total budget spent on process control.

After R&D, the ramp phase is the next most important stage requiring focused attention with very high sampling rates. It’s imperative that the yield be increased to profitable levels as quickly as possible and you can’t do this while blindfolded.

Finally, in the HVM phase an effective process control strategy minimizes risk by discovering yield limiting problems (excursions) in a timely manner.

It’s all about time, as time is money. 

References:

1) Process Watch: You Can’t Fix What You Can’t Find, Solid State Technology, July 2014

2) Process Watch: Sampling Matters, Semiconductor Manufacturing and Design, September 2014

3) Process Watch: The Most Expensive Defect, Solid State Technology, December 2014

4) Process Watch: Fab Managers Don’t Like Surprises, Solid State Technology, December 2014

5) Process Watch: Know Your Enemy, Solid State Technology, March 2015

About the authors:

Dr. David W. Price is a Senior Director at KLA-Tencor Corp. Dr. Douglas Sutherland is a Principal Scientist at KLA-Tencor Corp. Over the last 10 years, Dr. Price and Dr. Sutherland have worked directly with over 50 semiconductor IC manufacturers to help them optimize their overall inspection strategy to achieve the lowest total cost. This series of articles attempts to summarize some of the universal lessons they have observed through these engagements.

 

By Zvi Or-Bach, President and CEO of MonolithIC 3D Inc.

Scaling is now bifurcating: some are scaling on with 28/22nm, while others push below 14nm.

In his famous 1965 paper, “Cramming more components onto integrated circuits,” Moore wrote: “The complexity for minimum component costs has increased at a rate of roughly a factor of two per year.” Dimensional scaling below 28nm will only increase the ‘component cost,’ as we described in “Moore’s Law has stopped at 28nm” and as is detailed in the following tables published recently by IBS.

Fig 1

 

While there is still a strong effort behind dimensional scaling to 14, 10 and 7nm, and possibly even beyond, a new scaling effort is emerging to reduce the ‘component costs’ and increase integration while still utilizing the 28nm process node. The semiconductor industry is now going through a bifurcation phase.

This new emerging trend of scaling by factors other than dimensional scaling was recognized early on by Gordon Moore and was detailed in his famous 1975 IEDM paper, “Progress in digital integrated electronics.” In that paper Moore updated the time scaling rate to every two years and suggested that the following factors help drive scaling forward:

1. “Die size” – “larger chip area”
2. “Dimension” – “higher density” and “finer geometries”
3. “Device and circuit cleverness”

A fourth factor should have been added to the list above – improvement in manufacturing efficiency, which ensued from the increase in wafer sizes from 4” to 5” and all the way to the 12” of today, and many other manufacturing improvements.

In the past, all of these factors were aggregated into dimensional scaling as old fabs became obsolete and improvements were implemented predominantly in the new emerging node. Nowadays, as dimensional scaling has reached its diminishing-returns phase, we can see a very diverse adoption of technology improvements.

In his keynote presentation at the 2014 Synopsys user group meeting, Art De Geus, Synopsys CEO, presented multiple slides to illustrate the value of Synopsys’ newer tools to improve older node design effectiveness. The following is one of them:

Fig 2

AMD’s recent presentation at ISSCC 2015 clearly illustrates this point by showing device improvements while still staying at the same 28nm process node; see the slide below. As can be seen, major improvements in power, yield, and performance are possible over time without changing the technology node. A presentation by AMD President and CEO Dr. Lisa Su at Semicon China 2015 reiterated AMD’s technology progress within the same 28nm technology node:

Fig 3

Even more significant would be the adoption of a breakthrough technology. A good example is the SRAM technology developed by Zeno Semiconductor, which has recently been validated on a 28nm process. This new SRAM technology replaces the 6T SRAM bit cell with 1T SRAM (true SRAM – no refresh is needed) providing significant reduction of ‘component costs’ as is illustrated in the following two slides.

Fig 4

Fig 5

This new industry trend was nicely articulated by Kelvin Low of Samsung, covered in “Samsung Describes Road to 14nm, FinFETs a challenge, FD-SOI an alternative.” Quoting: “Samsung spent several years developing its 14nm technology and debating which process node it would invest in after 28nm. Low expects that 28nm will still be a popular process node for years to come because of its price …The cost per transistor has increased in 14nm FinFETs and will continue to do so, Low said, so an alternative technology such as 28nm SOI is necessary”. TSMC, too, is now spending on new R&D efforts to improve its 28nm offerings, as presented at the TSMC 2015 Technology Symposium, which introduced new 28nm processes, 28HPC+ and 28ULP. 28HPC+ is for high performance, with a speed gain of about 15% for the same leakage, or a 30-50% reduction in leakage at the same speed. The 28ULP (ultra-low power) process is for IoT applications, with a lower operating voltage of 0.7V (versus 0.9V for 28HPC+). New standard cell libraries were also developed for these processes, with 9- and 7-track libraries (compared to 12T/9T before).

“Device and circuit cleverness” as a factor will never stop; however, it is made of a series of individual improvements that will not be enough to sustain a long-term scaling path for the industry. An alternative long-term path will be “Die size” – “larger chip area,” which is effectively monolithic 3D, and manufacturing efficiency, which will have an important role in monolithic 3D.

And who is better to call it than Mark Bohr of Intel? In a recent blog piece “Intel predicts Moore’s Law to last another 10 years” Bohr is quoted predicting “that Moore’s Law will not come to an abrupt halt, but will morph and evolve and go in a different direction, such as scaling density by the 3D stacking of components rather than continuing to reduce transistor size.”

And this is also visible in the marketplace with the industry-wide adoption of 3D NAND devices, which Samsung started to mass-produce in 2014, followed with a second-generation 32-layer-stack device this year, and forecasts going to ~100 layers, as illustrated in their slide:

Fig 6

 

In the recent webcast “Monolithic 3D: The Most Effective Path for Future IC Scaling,” Dr. Maud Vinet of CEA Leti presented their “CoolCube” monolithic 3D technology, which was followed by our own, i.e., MonolithIC 3D, presentation. An important breakthrough presented by us was a monolithic 3D process flow that does not require changes in transistor-formation process and could be easily integrated by any fab at any process node.

Finally, I’d like to quote Mark Bohr again as we reported in our blog “Intel Calls for 3D IC”: “heterogeneous integration enabled by 3D IC is an increasingly important part of scaling” as was presented in ISSCC 2015.

Fig 7

 

This is illustrated nicely by the following figure presented by Qualcomm in their ISPD ‘15 paper titled “3D VLSI: A Scalable Integration Beyond 2D.”

Fig 8

 

In summary, the general promise of Moore’s Law is not going to end any time soon. Yet it is not going to be the simple brute-force x0.7 dimensional scaling that dominated the industry for the last 5 decades. Quoting Mark Bohr again, it “will morph and evolve and go in a different direction, such as scaling density by the 3D stacking of components rather than continuing to reduce transistor size.”

P.S. –

A good conference to learn about these new scaling technologies is the IEEE S3S ‘15, in Sonoma, CA, on October 5th thru 8th, 2015. CEA Leti is scheduled to give an update on their CoolCube program and three leading researchers from Berkeley, Stanford and Taiwan’s NLA Lab will present their work on advanced monolithic 3D integration technologies.

IBM today announced a significant milestone in the development of silicon photonics technology, which enables silicon chips to use pulses of light instead of electrical signals over wires to move data at rapid speeds and longer distances in future computing systems.

For the first time, IBM engineers have designed and tested a fully integrated wavelength multiplexed silicon photonics chip, which will soon enable manufacturing of 100 Gb/s optical transceivers. This will allow datacenters to offer greater data rates and bandwidth for cloud computing and Big Data applications.

“Making silicon photonics technology ready for widespread commercial use will help the semiconductor industry keep pace with ever-growing demands in computing power driven by Big Data and cloud services,” said Arvind Krishna, senior vice president and director of IBM Research. “Just as fiber optics revolutionized the telecommunications industry by speeding up the flow of data — bringing enormous benefits to consumers — we’re excited about the potential of replacing electric signals with pulses of light. This technology is designed to make future computing systems faster and more energy efficient, while enabling customers to capture insights from Big Data in real time.”

Silicon photonics uses tiny optical components to send light pulses to transfer large volumes of data at very high speed between computer chips in servers, large datacenters, and supercomputers, overcoming the limitations of congested data traffic and high-cost traditional interconnects. IBM’s breakthrough enables the integration of different optical components side-by-side with electrical circuits on a single silicon chip using sub-100nm semiconductor technology.

IBM’s silicon photonics chip uses four distinct colors of light travelling within an optical fiber, rather than traditional copper wiring, to transmit data in and around a computing system. In just one second, this new transceiver is estimated to be capable of digitally sharing 63 million tweets or six million images, or downloading an entire high-definition digital movie in just two seconds.

The technology industry is entering a new era of computing that requires IT systems and cloud computing services to process and analyze huge volumes of Big Data in real time, both within datacenters and particularly between cloud computing services. This requires that data be rapidly moved between system components without congestion. Silicon photonics greatly reduces data bottlenecks inside of systems and between computing components, improving response times and delivering faster insights from Big Data.

IBM’s new CMOS Integrated Nano-Photonics Technology will provide a cost-effective silicon photonics solution by combining the vital optical and electrical components, as well as structures enabling fiber packaging, on a single silicon chip. Manufacturing makes use of standard fabrication processes at a silicon chip foundry, making this technology ready for commercialization.

Silicon photonics technology leverages the unique properties of optical communications, which include transmission of high-speed data over kilometer-scale distances, and the ability to overlay multiple colors of light within a single optical fiber to multiply the data volume carried, all while maintaining low power consumption. These characteristics combine to enable rapid movement of data between computer chips and racks within servers, supercomputers, and large datacenters, in order to alleviate the limitations of congested data traffic produced by contemporary interconnect technologies.

Silicon photonics will transform future datacenters

By moving information via pulses of light through optical fibers, optical interconnects are an integral part of contemporary computing systems and next generation datacenters. Computer hardware components, whether a few centimeters or a few kilometers apart, can seamlessly and efficiently communicate with each other at high speeds using such interconnects. This disaggregated and flexible design of datacenters will help reduce the cost of space and energy, while increasing performance and analysis capabilities for users ranging from social media companies to financial services to universities.

Most of the optical interconnect solutions employed within datacenters as of today are based upon vertical cavity surface emitting laser (VCSEL) technology, where the optical signals are transported via multimode optical fiber. Demands for increased distance and data rate between ports, due to cloud services for example, are driving the development of cost-effective single-mode optical interconnect technologies, which can overcome the bandwidth-distance limitations inherent to multimode VCSEL links.

IBM’s CMOS Integrated Nano-Photonics Technology provides an economical solution to extend the reach and data rates of optical links. The essential parts of an optical transceiver, both electrical and optical, can be combined monolithically on one silicon chip, and are designed to work with standard silicon chip manufacturing processes.

IBM engineers in New York and Zurich, Switzerland, together with the IBM Systems unit, have demonstrated a reference design targeting datacenter interconnects with a range up to two kilometers. This chip demonstrates transmission and reception of high-speed data using four laser “colors,” each operating as an independent 25 Gb/s optical channel. Within a full transceiver design, these four channels can be wavelength multiplexed on-chip to provide 100 Gb/s aggregate bandwidth over a duplex single-mode fiber, thus minimizing the cost of the installed fiber plant within the datacenter.

Further details will be presented by IBM at the 2015 Conference on Lasers and Electro Optics (May 10-15) in San Jose, California, during the invited presentation entitled “Demonstration of Error Free Operation Up To 32 Gb/s From a CMOS Integrated Monolithic Nano-Photonic Transmitter,” by Douglas M. Gill, Chi Xiong, Jonathan E. Proesel, Jessie C. Rosenberg, Jason Orcutt, Marwan Khater, John Ellis-Monaghan, Doris Viens, Yurii Vlasov, Wilfried Haensch, and William M. J. Green.

IBM Research has been leading the development of silicon photonics for more than a decade, announcing a series of technology milestones beginning in 2006. Silicon photonics is among the efforts of IBM’s $3 billion investment to push the limits of chip technology to meet the emerging demands of cloud and Big Data systems.