
By Christian G. Dieseldorff, Industry Research & Statistics Group, SEMI

Semiconductor capital expenditures (without fabless and backend) are expected to slow in rate, but continue to grow by 5.8 percent in 2015 (over US$66 billion) and 2.5 percent in 2016 (over $68 billion), according to the May update of the SEMI World Fab Forecast report. A significant part of this capex is fab equipment spending.

Fab equipment spending is forecast to depart from the typical trend of the past 15 years, in which two years of spending growth were followed by one year of decline. Breaking from that norm, equipment spending could grow every year for three years in a row: 2014, 2015, and 2016 (see Table 1).

Table 1: Fab Equipment Spending by Wafer Size

At the end of May 2015, SEMI published its latest update to the World Fab Forecast report, reporting on more than 200 facilities with equipment spending in 2015, and more than 175 facilities projected to spend in 2016.

The report shows a large increase in spending for DRAM, more than 45 percent in 2015. Also, spending for 3D NAND is expected to increase by more than 60 percent in 2015 and more than 70 percent in 2016. The foundry sector is forecast to show 10 percent higher fab equipment spending in 2015, but may experience a decline in 2016.  Even with this slowdown, the foundry sector is expected to be the second largest in equipment spending, surpassed only by spending in the memory sector.

A weak first quarter of 2015 is pulling down spending for the first half of the year, but a stronger second half of 2015 is expected. Intel and TSMC reduced their capital expenditure plans for 2015, while other companies, especially in memory, are expected to increase their spending.

The SEMI data details how this varies by company and fab. For example, the report predicts increased fab equipment spending in 2015 by TSMC and Samsung. Samsung is the “wild card” on the table, with new fabs in Hwaseong, Line 17 and S3. The World Fab Forecast report shows how Samsung is likely to ramp these fabs into 2016. In addition, Samsung is currently ramping a large fab in China for 3D NAND (VNAND) production. Overall, the data show that Samsung will likely spend a bit more for memory in 2015 and much more in 2016. After two years of declining spending for System LSI, Samsung is forecast to show an increase in 2015, and especially in 2016.

Figure 1 depicts fab equipment spending by region for 2015.

Figure 1: Fab Equipment Spending in 2015 by Region; SEMI World Fab Forecast Report (May 2015).

In 2015, fab equipment spending by Taiwan and Korea together is expected to make up over 51 percent of worldwide spending, according to the SEMI report. In 2011, Taiwan and Korea accounted for just 41 percent, and the highest-spending region was the Americas, with 22 percent (now just 16 percent). China’s fab spending is still dominated by non-Chinese companies such as SK Hynix and Samsung, but the impact of Samsung’s 3D NAND project in Xian is significant. China’s share of fab spending grew from 9 percent in 2011 to a projected 11 percent in 2015; because of Samsung’s fab in Xian, the share will grow to 13 percent in 2016.

Table 2 shows how the top two companies in each region drive that region’s fab equipment spending:

Table 2: Share of Fab Equipment Spending of Top Two Companies per Region

Over time, fab equipment spending has also shifted by technology node.  See Figure 2, where nodes have been grouped by size:

Figure 2: Fab Equipment Spending by Nodes (Grouped)

In 2011, most fab equipment spending was for nodes between 25nm and 49nm (accounting for $24 billion), while nodes at 24nm or smaller drove less than $7 billion in spending. By 2015, spending flipped, with nodes at or below 24nm accounting for $27 billion while spending on nodes between 25nm and 49nm dropped to $8 billion. The SEMI World Fab data also predict more spending on nodes between 38nm and 79nm, due to increases in the 3D NAND sector in 2015 and accelerating in 2016 (not shown in the chart).

When is the next contraction?

As noted above, over the past 15 years the industry has never achieved three consecutive years of positive growth rates for spending.  2016 may be the year which deviates from this historic cycle pattern.  A developing hypothesis is that with more consolidation, fewer players compete for market positions, resulting in a more controlled spending environment with much lower volatility.

Learn more about the SEMI fab databases at: www.semi.org/MarketInfo/FabDatabase.

Today, SEMI announced that SEMICON Europa 2015, the region’s largest microelectronics manufacturing event, will offer new themes to support the semiconductor industry’s development in Europe. The exposition and conferences will take place in Dresden on October 6-8. SEMICON Europa will feature over 100 hours of technical sessions and presentations addressing the critical issues and challenges facing the microelectronics industries. Registration for visitors and conference participants opens today.

For the first time, SEMICON Europa will offer specific sessions on microelectronics in the automotive and medical technology segments as well as events focusing on microelectronics for the smart factory of the future. “SEMICON Europa will be the forum bringing semiconductor technology in direct contact with the industries that are driving chip usage the most right now,” explains Stephan Raithel, managing director in Berlin at SEMI. “The largest growth rates over the next few years will be in the automotive industry, medical technology, and communication technology – exactly the application areas that we are focusing on at SEMICON Europa this year.”

Materials and equipment for the semiconductor industry will remain the core of SEMICON Europa 2015. However, programs will also include new areas including imaging, low power, and power electronics. In addition, Plastic Electronics 2015, the world’s largest conference with exhibitions in the field of flexible, large-scale and organic electronics, will complement SEMICON Europa. In all, the SEMICON Europa 2015 conference program includes over 40 trade conferences and high-quality discussion forums.

At the Fab Managers Forum, Reinhard Ploss, CEO of Infineon Technologies AG, and Hans Vloeberghs, European Business director of Fujifilm, will be the keynote speakers, focusing on how the European semiconductor industry can improve its competitiveness. The Semiconductor Technology Conference, focusing on productivity enhancements for future advanced technology nodes in semiconductor technology, features keynote speakers Peter Jenkins, VP of Marketing at ASML; Niall MacGearailt, Advanced Manufacturing Research program manager at Intel; and Paul Farrar, GM for the consortium G450C at SUNY Polytechnic Institute’s Colleges of Nanoscale Science and Engineering, which works on creating the conditions necessary for producing chips on 450mm wafers.

New at SEMICON Europa 2015: SEMI and its German partner HighTech Startbahn are expanding the Innovation Village. Innovation Village is the ideal forum for European startups and high-growth businesses in search of investors. Sixty start-up/young businesses will have the opportunity to present their ideas and their business model to potential investors and industry partners. The application deadline is June 15.

Over 400 exhibitors at SEMICON Europa represent the suppliers of Europe’s leading microelectronics companies. From wafers to the finished product and every element in between, SEMICON Europa displays the best of microelectronics manufacturing. The exhibitor markets include semiconductors, MEMS, consumables, device fabrication, wafer processing, materials, assembly and packaging, process, test, and components.

To learn more (exhibition or registration), please visit: www.semiconeuropa.org/en.

Different forecasting algorithms are highlighted and a framework is provided on how best to estimate product demand using a combination of qualitative and quantitative approaches.

BY JITESH SHAH, Integrated Device Technology, San Jose, CA

Nothing in the world of forecasting is more complex than predicting demand for semiconductors, but this is one business where accurate forecasting can be a matter of long-term survival. Not only will the process of forecasting help reduce costs for the company, by holding the right amount of inventory in the channels and knowing what parts to build and when, but implementing a robust and self-adaptive system will also keep customers happy by providing them with the products they need, when they need them. Other benefits include improved vendor engagements and optimal resource (labor and capital) allocation.

Talking about approaches…

There are two general approaches to forecasting a time-based event: a qualitative approach and a quantitative, or more numbers-based, approach. If historical time-series data on the variable of interest is sketchy, or if the event being forecasted is related to a new product launch, a more subjective or expert-based predictive approach is necessary, but we all intuitively know that. New product introductions usually involve active customer and vendor engagements, and that allows us to have better control over what to build, when, and in what quantity. Even so, the Bass Diffusion Model, a technique geared towards helping to predict sales for a new product category, could be employed, but that will not be discussed in this context.

If, on the other hand, past data on the forecasted variable is handy and quantifiable, and it is fair to assume that the pattern of the past will likely continue into the future, then a more quant-based, algorithmic and somewhat automated approach is almost a necessity.

But how would one go about deciding whether to use an automated approach to forecasting or a more expert-based approach? A typical semiconductor company’s products could be segmented into four quadrants (FIGURE 1), and deciding whether to automate the process of forecasting will depend on which quadrant the product fits best.

Figure 1

Time series modeling

Past shipment data over time for a product, or a group of products, you are trying to forecast demand for is usually readily available, and that is generally the only data you need to design a system to automate the forecasting process. The goal is to discover a pattern in the historical, time-series data and extrapolate that pattern into the future. An ideal system should be built in such a way that it evolves, or self-adapts, and selects the “right” algorithm from the pre-built toolset if the shipment pattern changes. A typical time-series forecasting model has just two variables: an independent time variable and a dependent variable representing the event we are trying to forecast.

That event Qt (order, shipment, etc.) we are trying to forecast is more or less a function of the product’s life-cycle or trend, seasonality or business cycle, and randomness, as shown in the “whiteboard”-style illustration of FIGURE 2.

Figure 2

Trend and seasonality or business cycle are typically associated with longer-range patterns and hence are best suited to be used to make long-term forecasts. A shorter-term or horizontal pattern of past shipment data is usually random and is used to make shorter-term forecasts.

Forecasting near-term events

Past data exhibiting randomness with horizontal patterns can be reasonably forecasted using either a Naïve method or a simple averaging method. The choice between the two will depend on which one gives lower Mean Absolute Error (MAE) and Mean Absolute % Error (MAPE).

Naïve Method

The sample table in FIGURE 3 shows 10 weeks’ worth of sales data. Using the Naïve approach, the forecasted value for the 2nd week is just what was shipped in the 1st week. The forecasted value for the 3rd week is the actual sales value in the 2nd week, and so on. The difference between the actual value and the forecasted value is the forecast error, and its absolute value is used to calculate the total error. MAE is the total absolute error divided by the number of forecasted values. A similar approach is used to calculate MAPE, but now each individual error is divided by the actual sales volume to calculate a % error; these are then summed and divided by the number of forecasted values.

Figure 3
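
As a rough sketch of the calculation (not the author’s spreadsheet), the Naïve forecast and its MAE/MAPE can be computed in a few lines of Python; the weekly sales figures below are illustrative placeholders, not the values in Figure 3:

```python
# Naive forecast: next week's forecast is simply this week's actual value.
sales = [22, 25, 19, 28, 31, 24, 27, 30, 26, 29]   # 10 weeks of actual sales (placeholder data)

forecasts = sales[:-1]   # forecast for week t+1 is the actual of week t
actuals = sales[1:]      # weeks 2..10 have a forecast to compare against

abs_errors = [abs(a - f) for a, f in zip(actuals, forecasts)]
mae = sum(abs_errors) / len(abs_errors)                                # Mean Absolute Error
mape = sum(e / a for e, a in zip(abs_errors, actuals)) / len(actuals)  # Mean Absolute % Error

print(f"Naive: MAE = {mae:.2f}, MAPE = {mape:.1%}")
```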

Averaging

Instead of using only the last observed event to forecast the next one, a better approach is to use the mean of all past observations as the next period’s forecast. For example, the forecasted value for the 3rd week is the mean of the 1st and 2nd week’s actual sales values. The forecasted value for the 4th week is the mean of the previous three actual sales values, and so on (FIGURE 4).

Figure 4
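
A similar sketch for the averaging method, again using placeholder data rather than the Figure 4 values:

```python
# Averaging forecast: each week's forecast is the mean of all actual sales observed so far.
sales = [22, 25, 19, 28, 31, 24, 27, 30, 26, 29]   # same placeholder data as above

forecasts, actuals = [], []
for t in range(1, len(sales)):
    forecasts.append(sum(sales[:t]) / t)   # mean of weeks 1..t becomes the forecast for week t+1
    actuals.append(sales[t])

abs_errors = [abs(a - f) for a, f in zip(actuals, forecasts)]
mae = sum(abs_errors) / len(abs_errors)
mape = sum(e / a for e, a in zip(abs_errors, actuals)) / len(actuals)
print(f"Averaging: MAE = {mae:.2f}, MAPE = {mape:.1%}")
```

Whichever method yields the lower MAE and MAPE on your own history is the one to keep.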

MAE and MAPE for the Naïve method are 4.56 and 19% respectively, and the same for the averaging method are 3.01 and 13% respectively. Right there, one can conclude that averaging is better than the simple Naïve approach.

Horizontal Pattern with Level Shift

But what happens when there is a sudden shift (anticipated or not) in the sales pattern, like the one shown in FIGURE 5?

Figure 5

The simple averaging approach needs to be tweaked to account for that, and that is where a moving average approach is better suited. Instead of averaging across the entire time series, only the 2, 3 or 4 most recent time events are used to calculate the forecast value. How many time periods to use will depend on which choice gives the smallest MAE and MAPE values, and that can and should be parameterized and coded. The tables in FIGURE 6 compare the two approaches, and the moving average approach is clearly the better fit for predicting future events.

Figure 6
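
A small sketch of the moving-average approach with the window length parameterized and selected by the lowest MAE, as described above; the data is made up and includes a level shift partway through:

```python
# Trailing moving-average forecast; the window length is chosen by the lowest MAE.
sales = [20, 22, 21, 23, 22, 35, 36, 34, 37, 36]   # placeholder data with a level shift at week 6

def moving_average_errors(series, window):
    """Return (MAE, MAPE) for a trailing moving-average forecast of the given window."""
    abs_err = pct_err = 0.0
    n = 0
    for t in range(window, len(series)):
        forecast = sum(series[t - window:t]) / window
        err = abs(series[t] - forecast)
        abs_err += err
        pct_err += err / series[t]
        n += 1
    return abs_err / n, pct_err / n

# Try 2-, 3- and 4-period windows and keep the one with the smallest MAE.
best_window = min(range(2, 5), key=lambda w: moving_average_errors(sales, w)[0])
mae, mape = moving_average_errors(sales, best_window)
print(f"Best window = {best_window}: MAE = {mae:.2f}, MAPE = {mape:.1%}")
```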

Exponential Smoothing

But oftentimes there is a better approach, especially when the past data exhibits severe and random level shifts.

This approach is well suited to such situations because the exponentially weighted moving average of the entire time series tends to deemphasize older data over time, while still including it, and at the same time weighs recent observations more heavily. The relationship between the actual and forecasted values is shown in FIGURE 7.

Figure 7

Again, the lowest MAE and MAPE will help decide the optimal value for the smoothing constant and, as always, this can easily be coded based on the data you already have, and can be automatically updated as new data trickles in.
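
One way this could be coded, assuming the smoothing relationship F(t+1) = alpha*A(t) + (1 - alpha)*F(t) shown in Figure 7 and sweeping the smoothing constant for the lowest MAE; the data is illustrative only:

```python
# Simple exponential smoothing with the smoothing constant picked by the lowest MAE.
sales = [20, 22, 21, 23, 35, 36, 34, 47, 46, 45]   # placeholder data with random level shifts

def exp_smooth_forecasts(series, alpha):
    """One-step-ahead forecasts; the first forecast is seeded with the first actual."""
    forecasts = [series[0]]                      # F(2) = A(1)
    for actual in series[1:-1]:                  # generate F(3)..F(n)
        forecasts.append(alpha * actual + (1 - alpha) * forecasts[-1])
    return forecasts                             # aligned with series[1:]

def mae(series, alpha):
    f = exp_smooth_forecasts(series, alpha)
    return sum(abs(a - x) for a, x in zip(series[1:], f)) / len(f)

best_alpha = min((a / 10 for a in range(1, 10)), key=lambda a: mae(sales, a))
print(f"Best alpha = {best_alpha}: MAE = {mae(sales, best_alpha):.2f}")
```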

But based on the smoothing equation above, one must wonder how the entire time series is factored in when only the most recent actual and forecasted values are used as part of the next period’s forecast. The math in FIGURE 8 explains how.

Figure 8

The forecast for the second period is assumed to be the first observed value. The third period is the first true derived forecast, and with subsequent substitutions, one quickly finds that the forecast for the nth period is a weighted average of all previously observed events. The weight ascribed to later events compared to earlier events is shown in the plot in FIGURE 9.

Figure 9
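
A quick numerical check, in the same Python sketch style, that the recursive form and the explicit weighted-average expansion illustrated in Figure 8 agree (assuming the forecast is seeded with the first observation, F2 = A1):

```python
# The recursive smoothing forecast equals an explicit weighted average of all earlier actuals,
# with weights alpha*(1-alpha)^k and the oldest value weighted (1-alpha)^(n-1).
alpha = 0.3
actuals = [20, 22, 21, 23, 35, 36]   # illustrative observations A1..A6
n = len(actuals)

# Recursive form: repeatedly apply F(t+1) = alpha*A(t) + (1-alpha)*F(t)
recursive = actuals[0]
for a in actuals[1:]:
    recursive = alpha * a + (1 - alpha) * recursive

# Explicit weighted-average form for F(n+1)
explicit = sum(alpha * (1 - alpha) ** k * actuals[n - 1 - k] for k in range(n - 1))
explicit += (1 - alpha) ** (n - 1) * actuals[0]

print(abs(recursive - explicit) < 1e-9)   # True: both forms give the same forecast
```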

Making longer term forecasts

A semiconductor product’s lifecycle is usually measured in months but, surprisingly, there are quite a few products with lifespans measured in years, especially when the end applications exhibit long and growing adoption cycles. These products not only exhibit shorter-term randomness in their time series but also show a longer-term seasonal or cyclical nature with a growing or declining trend over the years.

The first step in estimating the forecast over the longer term is to smooth out some of that short-term randomness using the approaches discussed before. The unsmoothed and smoothed curves might resemble the plot in FIGURE 10.

Figure 10

Clearly, the data exhibits a long-term trend along with a seasonal or cyclical pattern that repeats every year, and Ordinary Least Squares (OLS) regression is the ideal approach for forming a function that estimates that trend and the parameters involved. But before crunching the numbers, the dataset has to be prepped to include a set of dichotomous variables representing the different intervals in that seasonal behavior. Since the seasonality in this situation is by quarters, representing Q1, Q2, Q3 and Q4, only three of them are included in the model. The fourth one, Q2 in this case, forms the basis against which to measure the significance of the other three quarters (FIGURE 11).

Figure 11

The functional form of the forecasted value by quarter looks something like what’s shown in FIGURE 12.

Figure 12

The intercept b0 moves up or down based on whether the quarter in question is Q2 or not. If b2, b3 and b4 are positive, Q2 will exhibit the lowest expected sales volume. The other three quarters will show increasing expected sales in line with the increase in the respective estimated parameter values. And this equation can be readily used to reasonably forecast an event a few quarters or a few years down the road.
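
As an illustrative sketch (not the author’s implementation), the trend-plus-quarterly-dummy regression and a forecast a few quarters out could look like the following, using numpy’s least-squares solver on made-up quarterly data with Q2 as the omitted base quarter:

```python
import numpy as np

# Three years of smoothed, illustrative quarterly sales, labeled Q1..Q4
sales = np.array([30.0, 24, 33, 38, 34, 27, 37, 42, 38, 31, 41, 47])
quarter = np.tile([1, 2, 3, 4], 3)
t = np.arange(1, len(sales) + 1)            # time index carries the long-term trend

def seasonal_design(t_index, quarters):
    """Design matrix: intercept, trend, and dummies for Q1, Q3, Q4 (Q2 is the base)."""
    return np.column_stack([
        np.ones_like(t_index, dtype=float),
        t_index.astype(float),
        (quarters == 1).astype(float),
        (quarters == 3).astype(float),
        (quarters == 4).astype(float),
    ])

coeffs, *_ = np.linalg.lstsq(seasonal_design(t, quarter), sales, rcond=None)

# Forecast the next four quarters by extending the trend and the dummies.
future_t = np.arange(len(sales) + 1, len(sales) + 5)
future_q = np.array([1, 2, 3, 4])
print(seasonal_design(future_t, future_q) @ coeffs)   # expected sales for the next year
```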

So there you have it. This shows how easy it is to automate some features of the forecasting process, and the importance of building an intelligent, self-aware and adaptive forecasting system. The results will not only reduce cost but help refocus your supply-chain planning efforts on bigger and better challenges.

JITESH SHAH is a principal engineer with Integrated Device Technology, San Jose, CA

Over the past 15 years, strong growth in optoelectronics has been fueled by several different product categories at different times.  Laser transmitters for high-speed optical networks were a major growth driver before the “dot.com” implosion in 2001. Image sensors and lamp devices (primarily light-emitting diodes—LEDs) became star performers in the last decade, and more recently, laser transmitters have re-emerged as a major growth driver in optoelectronics.  IC Insights believes these three products will be key contributors to overall growth of the optoelectronics market through 2019 (Figure 1).

Figure 1: Optoelectronics snapshot

Through 2019, IC Insights sees these three trends driving optoelectronics market growth:

•    High-brightness LEDs (HB-LEDs) have reached the luminous efficacy of fluorescent lights and are in a position to be a major factor in the $100 billion global lighting industry. Since the end of the last decade, strong sales of HB-LEDs have gone into backlighting systems for cellphones, tablets, LCD TVs, and computer displays, but this growth has greatly eased with penetration rates reaching nearly 100 percent in these applications. With production capacity growing, HB-LED suppliers are concentrating on cutting costs and improving the overall quality of light for general illumination products in homes, businesses, buildings, outdoor lighting, and other applications, such as automotive headlamps and digital signs. HB-LED 2014-2019 CAGR forecast (sales): 9.7 percent.

•    CMOS image sensors have entered into another wave of strong sales growth as digital imaging moves into new automotive-safety systems, medical equipment, video security and surveillance networks, human-recognition user interfaces, wearable body cameras, and other embedded applications beyond camera phones and stand-alone digital cameras. CMOS image sensor 2014-2019 CAGR forecast (sales):  11.1 percent.

•    Fiber-optic laser transmitters will continue to be the fastest growing optoelectronics product category as network operators struggle to keep up with huge increases in Internet traffic, video streaming and downloads, cloud-computing services, and the potential for billions of new connections in the Internet of Things (IoT).  Laser transmitter 2014-2019 CAGR forecast (sales):  15.3 percent.

A Si quantum dot (QD)-based hybrid inorganic/organic light-emitting diode (LED) that exhibits white-blue electroluminescence has been fabricated by Professor Ken-ichi SAITOW (Natural Science Center for Basic Research and Development, Hiroshima University), Graduate student Yunzi XIN (Graduate School of Science, Hiroshima University), and their collaborators.

Professor Ken-ichi Saitow, Natural Science Center for Basic Research and Development, Hiroshima University, and Graduate student Yunzi Xin, Graduate School of Science, Hiroshima University, have fabricated an Si QD hybrid LED. CREDIT: Natural Science Center for Basic Research and Development, Hiroshima University

 

A hybrid LED is expected to be a next-generation illumination device for producing flexible lighting and display, and this is achieved for the Si QD-based white-blue LED. For details, refer to “White-blue electroluminescence from a Si quantum dot hybrid light-emitting diode,” in Applied Physics Letters; DOI: 10.1063/1.4921415.

The Si QD hybrid LED was developed using a simple method; almost all processes were solution-based and conducted at ambient temperature and pressure. Conductive polymer solutions and a colloidal Si QD solution were deposited on the glass substrate. The current and optical power densities of the LED are, respectively, 280 and 350 times greater than those reported previously for such a device at the same voltage (6 V). In addition, the active area of the LED is 4 mm2, which is 40 times larger than that of a typical commercial LED; the thickness of the LED is 0.5 mm.

“QD LED has attracted significant attention as a next-generation LED,” Professor Saitow said. “Although several breakthroughs will be required for achieving implementation, a QD-based hybrid LED allows us to give so fruitful feature that we cannot imagine.”

Suppliers of MEMS-based devices rode a safety sensing wave in 2014 to reach record turnover in automotive applications, according to analysis from IHS, the global source of critical information and insight.

Mandated safety systems such as Electronic Stability Control (ESC) and Tire Pressure Monitoring Systems (TPMS) – which attained full implementation in new vehicles in major automotive markets last year – are currently driving revenues for MEMS sensors. Those players with strong positions in gyroscopes, accelerometers and pressure sensors needed in these systems grew as well, while companies in established areas like high-g accelerometers for frontal airbags and pressure sensors for side airbags also saw success.

Major suppliers of pressure sensors to engines similarly blossomed – for staple functions like manifold absolute air intake and altitude sensing – but also for fast-growing applications like vacuum brake boosting, gasoline direct injection and fuel system vapor pressure sensing.

Bosch was the overall number one MEMS supplier with US$790 million of devices sold last year, close to three times that of its nearest competitor, Sensata (US$268 million). Bosch has a portfolio of MEMS devices covering pressure, flow, accelerometers and gyroscopes, and also has a leading position in more than 10 key applications. The company grew strongly in ESC and roll-over detection applications, and key engine measurements like manifold absolute pressure (MAP) and mass air flow on the air intake, vacuum brake booster pressure sensing and common rail diesel pressure measurement.

Compared to 2013, Sensata jumped to second place in 2014 ahead of Denso and Freescale, largely on strength in both safety and powertrain pressure sensors, but also through its acquisition of Schrader Electronics, which provides Sensata with a leading position among tire pressure-monitoring sensor suppliers.

While Sensata is dominant in TPMS and ESC pressure sensors, it also leads in harsh applications like exhaust gas pressure measurement. Freescale, on the other hand, is second to Bosch in airbag sensors and has made great strides in its supply of pressure sensors for TPMS applications.

Despite good results in 2014, Denso dropped two places compared to its overall second place in 2013, largely as a result of the weakened Yen. Denso excelled in MAP and barometric pressure measurement in 2014, but also ESC pressure and accelerometers. Denso has leadership in MEMS-based air conditioning sensing and pressure sensors for continuous variable transmission systems, and is also a supplier of exhaust pressure sensors to a major European OEM.

Secure in its fifth place, Analog Devices was again well positioned with its high-g accelerometers and gyroscopes in safety sensing, e.g. for airbag and ESC vehicle dynamics systems, respectively.

The next three players in the top 10, in order Infineon, Murata and Panasonic, likewise have key sensors to offer for safety. Infineon is among the leading suppliers of pressure sensors for TPMS systems, while Murata and Panasonic serve ESC with gyroscopes and accelerometers supplied to major Tier Ones.

The top 10 represents 78 percent of the automotive MEMS market volume, which reached $2.6 billion in 2014. By 2021, this market will grow to $3.4 billion, a CAGR of 3.4 percent, given expected growth for four main sensors — pressure, flow, gyroscopes and accelerometers.  In addition, night-vision microbolometers from FLIR and ULIS and humidity sensors from companies like Sensirion and E+E Elektronik for window defogging will also add to the diversity of the mix in 2021.


DLP chips from Texas Instruments for advanced infotainment displays will similarly bolster the market further in future. More details can be found in the IHS Technology H1 2015 report on Automotive MEMS.


SEMI, the global industry association for companies that supply manufacturing technology and materials to the world’s chip makers, today reported that worldwide semiconductor manufacturing equipment billings reached US$9.52 billion in the first quarter of 2015. The billings figure is 7 percent higher than the fourth quarter of 2014 and 6 percent lower than the same quarter a year ago. The data is gathered jointly with the Semiconductor Equipment Association of Japan (SEAJ) from over 100 global equipment companies that provide data on a monthly basis.

Worldwide semiconductor equipment bookings were $9.66 billion in the first quarter of 2015. The figure is 2 percent lower than the same quarter a year ago and 3 percent lower than the bookings figure for the fourth quarter of 2014.

The quarterly billings data by region, in billions of U.S. dollars, with quarter-over-quarter and year-over-year growth rates, are as follows:

Region          1Q2015   4Q2014   1Q2014   1Q15/4Q14 (Q-o-Q)   1Q15/1Q14 (Y-o-Y)
Korea             2.69     2.09     2.03         29%                  33%
Taiwan            1.81     2.03     2.59        -11%                 -30%
North America     1.47     1.83     1.85        -19%                 -20%
Japan             1.26     1.11     0.96         13%                  31%
China             1.17     0.68     1.71         73%                 -32%
Europe            0.69     0.58     0.58         19%                  19%
Rest of World     0.43     0.59     0.42        -27%                   1%
Total             9.52     8.91    10.15          7%                  -6%

Source: SEMI/SEAJ June 2015; Note: Figures may not add due to rounding.

BY DR. RANDHIR THAKUR, Executive Vice President, General Manager, Silicon Systems Group, APPLIED MATERIALS, INC

For 50 years, Moore’s Law has served as a guide for technologists everywhere in the world, setting the pace for the semiconductor industry’s innovation cycle. Moore’s Law has made a tremendous impact not only on the electronics industry, but on our world and our everyday life. It led us from the infancy of the PC era, through the formative years of the internet, to the adolescence of smartphones. Now, with the rise of the Internet of Things, market researchers forecast that in the next 5 years, the number of connected devices per person will more than double, so even after 50 years we don’t see Moore’s Law slowing down.

As chipmakers work tirelessly to continue device scaling, they are encountering daunting technical and economic hurdles. Increasing complexity is driving the need for new materials and new device architectures. Enabling these innovations and the node-over-node success of Moore’s Law requires advancements in precision materials engineering, including precision films, materials removal, materials modification and interface engineering, supported by metrology and inspection.

Though scaling is getting harder, I am confident Moore’s Law will continue because equipment suppliers and chipmakers never cease to innovate. As we face the increasing challenges of new technology inflections, earlier engagement in the development cycle between equipment suppliers and chipmakers is required to uncover new solutions. Such early and deep collaboration is critical to delivering complex precision materials engineering solutions on time. In fact, in the mobility era, earlier and deeper collaboration across the entire value chain is essential (applications, system/hardware, fabless, foundry/IDM, equipment supplier, chemical supplier, component supplier, etc.) to accelerate time to market and extend Moore’s Law.

Today, new 3D architectures, FinFET and 3D NAND, are enabling the extension of Moore’s Law. Dense 3D structures with high aspect ratios create fundamental challenges in device manufacturing. Further, the industry has shifted much of its historical reliance from litho-enabled scaling to materials-enabled scaling, requiring thinner precision films with atomic-scale accuracy. The emphasis on thin conformal films, which can be 2000 times smaller than a human hair, makes it increasingly critical to engineer film properties and manage film interactions between adjacent film surfaces. Selective processing is also a growing requirement, particularly for the deposition and removal of films. We expect more selective applications beyond Epitaxy and Cobalt liner deposition. There will also be a major expansion of new materials in addition to the key inflection of high-k metal gate that helped to reduce power leakage issues associated with scaling.

Gordon Moore’s prediction that ignited an industry will continue to influence our way of life through a combination of architecture and material changes. New process designs and new ways to atomically deposit materials are needed. More processes will be integrated on the same platform without vacuum breaks to create pristine interfaces. As an equipment supplier, we have to manage longer R&D cycles to support the industry’s roadmap, and plan for faster ramp and yield curves. Of utmost importance is staying close to our customers to ensure we deliver solutions with the desired economic and technical benefits.

Looking at the electronics industry from where it is today out to 2020, many more devices will be in use, the world will be more connected and, particularly in emerging markets, there will be greater consumer appetite for more products with advanced features. Given these transformations and demand, I think the growth and excitement in our industry will continue for many more years, thanks to Moore’s Law.

A new, low pH, BTA free, noble-bond chemistry produced equivalent yield at substantially lower costs.

BY CHRISTOPHER ERIC BRANNON, Texas Instruments, Dallas, TX

The 2010 economic downturn affected many industries, semiconductor manufacturing included. Many fabrication facilities had to lay off employees and curtail spending, all the while managing lower wafer output. This caused many semiconductor companies to rethink how they spend on resources. Everything was considered, from the cost of the wafers to the cost of the tool consumables and chemistries.

Texas Instruments (TI) copper chemical-mechanical planarization (Cu CMP) was no different. All spending had to be reduced, and the copper hillock defect had to be eliminated. The CMP team proposed developing a process based on the new third-generation clean chemistry on the market for a number of economic and logistical reasons. The first rationale for this strategy was cost and the second was time: most of the clean chemistries on the market were considerably cheaper than the current process of record (POR). CMP had also seen many defects due to via-to-via shorts caused by Cu hillocking (localized Cu protrusion into the above interlayer dielectric; see FIGURE 1).

FIGURE 1. TEM of copper hillocks [1].

A successful Cu cleaning CMP process

There were two key reasons that TI succeeded in developing a Cu cleaning process: detailed engineering work and strong vendor support. Process development went through four generations of refinement before it was ready for high-volume manufacturing. The first version focused on new clean chemistry improvements, such as a third-generation low-pH, high-acid clean chemistry and an array of designs of experiments (DOE); continuous improvement through optimization of the process controls and equipment modification followed in the second. The third generation attempted to adapt an existing Mirra-Desica process using a previously qualified process. A final successful attempt was made during the fourth cycle to develop a lower-cost, higher-throughput multi-copper-platen cleaning process using a commercial chemistry from Air Products, COPPEREADY®CP72B. This paper will discuss the work that went into building TI’s successful Cu cleaning CMP process.

TI Cu CMP

Neutral-pH clean chemistries using benzotriazole (BTA) were the first-generation application on most Cu CMP dual damascene back-end-of-line processes at TI. This was dependent on using dry-in wet-out Cu CMP AMAT tools with spray acid Vertec hoods for cleaning and drying. It was also very high in cost and low in consumable life compared to most conventional CMP clean processes (e.g., tungsten, STI, oxide). The TI POR was no different: a first-generation Cu clean using three different chemistries, BTA, Electra Clean and ESC774™. These chemistries were very expensive to use and were not very efficient at cleaning or passivating the polished copper surface. They were able to passivate the copper surface but were prone to leave many types of incompatible carbon residue defects on the wafers. Cu hillocking was very prevalent with this type of cleaning solution, and via-to-via shorts in the back end of the line (BEOL) were at the top of the defect Pareto for TI.

Clean chemistry identification

To reduce the time needed to develop a new Cu CMP clean process, most of the development cycle focused on Cu cleans leveraging a Mirra-Desica DIDO Cu polishing process using existing pads, conditioning pucks, and heads. Early on, it was decided that to achieve maximum throughput, the wafers would need to be processed through the tool’s onboard scrubber and dry station as quickly as possible. With time running out, the Cu CMP team contacted the major players in Cu clean chemistry to obtain their specific information and prepared a white-paper screening to determine the correct path. The four candidates were evaluated on chemistry type, makeup, pH, passivation (BTA), cost, and compatibility with the current Cu and barrier slurries. Two of the chemistries fit the criteria and were selected for further testing. Chemistry 1 was a novel approach for Cu CMP and came from the current clean chemistry vendor; Chemistry 2 was similar to the current TI process of record.

The initial criteria used to judge the chemistries were blanket test wafer performance (Cu, TEOS, Ta, and nitride): etch rate, passivation, cleaning tunability via recipe parameter windowing, and defectivity. Experimental designs were run on the basic process controls for these chemistries with respect to the polish process: carrier speed, table speed, down-force, carrier position, carrier oscillation, and chemical flow. Both cleans performed well in the blanket experiments and were advanced to short-loop, patterned wafer tests. These patterned wafer tests were used to study product behavior in the polisher and brush cleaner. A significant amount of time was spent adjusting recipe parameters to eliminate defects. The team contacted both vendors to run lifetime experiments with consumables at their facilities. The data collected revealed many issues with each candidate, one more so than the other (FIGURE 2).

FIGURE 2. Charts of Cu CMP defects showing effects of new clean chemistry.

Chemistry A was a second-generation Cu clean that had a high pH but included chemical additives to aid in cleaning, still a very basic approach to wafer cleaning. The overall defectivity was sufficient on the product test wafers but would degrade within a short time window after polish. It also had to be paired with another chemistry to achieve the same Cu passivation as the POR. This chemistry was disqualified for these reasons.

Chemistry B is a third-generation Cu clean that has a low pH (about 2.1) and is BTA-free, unlike any other Cu clean on the market at that time. This chemistry is an organic acid blend, which helps ionize Cu2O and CuO to form water and a soluble Cu complex, used for passivation. This forms a strong bond with the Cu to make the surface noble. The low pH helps to dissolve the surface defects, resulting in a step-function decrease in defectivity compared to baseline (see Figure 2). The chemistry is also scalable by concentration, making the cost of ownership low. This chemistry was selected for qualification at TI Cu CMP.

Vendor support

TI’s internal polishing engineering staff was augmented with exceptional support from several consumable vendors during development. Together, TI engineers developed proprietary and patent-pending technologies to enhance the Mirra-Desica cleaner performance on Cu BEOL CMP. TI also benefited from strong relationships with its contact clean brush suppliers. Rippy was instrumental in brush evaluations and consultation on process development. To improve the tool’s performance, DOW was pivotal in adding functionality to the process through end-of-life evaluations. Perhaps the most important relationship that developed was with Air Products, which provided an invaluable education in Cu cleaning process development.

Solving defect issues

During process development, TI engineers encountered several defect related issues. Some issues like photo-induced corrosion were resolved quickly after some technical research. There were two others that took more troubleshooting: carbon residue defects and Cu hillock formation.

The presence of gross surface defects, like carbon residue, is an obvious yield killer. Through EDX (energy-dispersive X-ray spectroscopy) and much lab analysis, the Cu CMP engineers came to the conclusion that the current Cu slurry still had traces of BTA in it, which was causing this residue defect to form on the wafers after polish. Many DOEs later, it was determined that extending the clean chemistry buff polish would eliminate this defect.

With residue defects effectively eliminated, the next major technical challenge was Cu hillock formation. TI had been experiencing higher defectivity due to back-end-of-the-line via-to-via shorts with the previous Cu CMP clean chemistry process. It was understood that the formation of Cu hillocks was the cause of this signature. To solve this problem, a completely different wafer cleaning chemistry was needed to passivate the copper surface. TI Cu CMP engineers looked for one that did not use BTA or other high-pH chemistries but would coat the wafer surface and not allow native oxides to grow on the Cu. The new chemistry (CoppeReady®CP72B) proved to form a noble bond with the Cu (CuO2) and eliminated hillock growth, thereby reducing via-to-via shorts (see FIGURE 3).

FIGURE 3. Metal 1 via etch contact pitting chart (dark vias induced by copper hillock).

Further process development

One of the last stages of development on the new process was a project to develop a faster-throughput process. Although this work was successful, it highlights some of the challenges in pursuing this type of strategy. The motivation for this work was to dramatically boost throughput and to further cut process expense. The POR process was limited by the cleaner and was much slower, driving up cost and limiting wafer-per-hour rates. To maximize throughput, the new process would have two components: speed up the onboard cleaner (brush box 1 and 2) throughput, and decrease the platen 2 and 3 process times while adding a clean chemistry buff. Because of the high down forces employed to achieve a flat removal profile, the Cu polishing component of this work, platen 1, was surprisingly fast but was the intended bottleneck. These changes allowed for a 10 percent increase in overall wafer throughput compared to the baseline process. They also had a secondary effect on the Cu polish process. TI’s current Cu slurry is thermally driven; making platen 1 the bottleneck kept that platen at one constant temperature throughout the lot, reducing and streamlining the overall endpoint (EPD) times. This further increased the tool’s throughput by 2 percent and reduced wafer-to-wafer EPD variation to 2 to 3 seconds, down from the previous 10 to 12 seconds between wafers (see FIGURE 4).

FIGURE 4. Cu CMP end point charts, variation reduction, clean chemistry and throughput enhancements.

Benchmarking performance

For initial qualification and benchmarking, TI installed and set up the best known method (BKM) Cu polishing process on an Applied Materials Mirra-Desica. To bring the new clean process into production, Cu polish engineers needed to demonstrate equivalent or better yield between the two competing processes. The new clean chemistry also needed to be tested for EM (electromigration), a stress test of Cu interconnects between two metal lines. This test had to be outsourced to a third-party company that specializes in oven-baking stress tests (FIGURE 5). After extensive electrical and yield testing, the new clean process was fully released. Sample yield comparisons consistently demonstrated that the performance is equivalent to slightly better, and the new process has higher throughput (~12 percent). The chemical costs (dilute 60-to-1 CP72B®) are 68 percent less per wafer pass than the competing process. The pad/conditioner life increased by 13 percent over the previous process due to the thermally driven Cu slurry throughput modification (FIGURES 6 AND 7).

FIGURE 5. Electromigration (EM) stress test, new clean vs baseline.

FIGURE 6. Sample availability with the new clean chemistry improvements.

FIGURE 7. Clean chemistry cost over time in Cu CMP in terms of lots processed.

Conclusion

TI engineers developed a Cu CMP cleaning process using new third generation low pH Cu chemistry. Despite the tool’s many limitations, the engineering staff successfully delivered an integrated process capable of producing equivalent yield at substantially lower costs over the best alternative method. There were undoubtedly challenges along the way, only a fraction of which have been described in this paper. By leveraging an existing deep reservoir of engineering, maintenance, and operational talent, an existing and efficient supply chain, and the outstanding support of numerous vendors, TI Polish module was able to realize its goal of making efficient use of its assets to achieve a competitive advantage.

References

1. Tsung-Kuei Kang and Wei-Yang Chou, “Avoiding Cu Hillocks during the Plasma Process,” Journal of The Electrochemical Society, 151.

CHRISTOPHER ERIC BRANNON is a Cu CMP manufacturing engineer at Texas Instruments, Dallas, TX.

SEMI today announced the update of its World Fab Forecast report for 2015 and 2016. The report projects that semiconductor fab equipment spending (new and used, for front-end facilities) will increase 11 percent (to US$38.7 billion) in 2015 and another 5 percent (to $40.7 billion) in 2016. Since February 2015, SEMI has made 282 updates to its detailed World Fab Forecast report, which tracks fab spending for construction and equipment, as well as capacity changes, technology node transitions and product type changes by fab.

Capital expenditure (capex without fabless and backend) by device manufacturers is forecast to increase almost 6 percent in 2015 and over 2 percent in 2016. Fab equipment spending is forecast to depart from the typical historic trend over the past 15 years of two years of spending growth followed by one of decline.  For the first time, equipment spending could grow every year for three years in a row: 2014, 2015, and 2016.

The SEMI World Fab Forecast Report, which takes a bottom-up, company-by-company and fab-by-fab approach, lists over 48 facilities making DRAM products and 32 facilities making NAND products. The report also monitors 36 construction projects with investments totaling over $5.6 billion in 2015 and 20 construction projects with investments of over $7.5 billion in 2016.

According to the SEMI report, fab equipment spending in 2015 will be driven by Memory and Foundry, with Taiwan and Korea projected to become the largest markets for fab equipment at $10.6 billion and $9.3 billion, respectively. The market in the Americas is forecast to reach $6.1 billion, with Japan and China following at $4.5 billion and $4.4 billion, respectively. Europe/Mideast is predicted to invest $2.6 billion. The fab equipment market in South East Asia is expected to total $1.2 billion in 2015.

Learn more about the SEMI World Fab Forecast and plan to attend the SEMI/Gartner Market Symposium at SEMICON West 2015 on Monday, July 13 for an update on the semiconductor supply chain market outlook. In addition to presentations from Gartner analysts, Christian Dieseldorff of SEMI will present on “Trends and Outlook for Fabs and Fab Capacity” and Lara Chamness will present on “Semiconductor Wafer Fab Materials Market and Year-to-Date Front-End Equipment Trends.”   

Fab Equipment Spending
(for Front-End Facilities, includes new, used, in-house)

 

Region                2014 (US$B)   2015 (US$B)   Year-over-Year
Americas                  7.8           6.1            -22%
China                     4.1           4.4             10%
Europe and Mideast        2.2           2.6             18%
Japan                     3.8           4.5             17%
Korea                     7.4           9.3             27%
SE Asia                   1.1           1.2              2%
Taiwan                    8.5          10.6             25%
Total                    34.9          38.7             11%

Source: SEMI World Fab Forecast Reports (May 2015). Totals may not add due to rounding.