Category Archives: Metrology

By Douglas G. Sutherland and David W. Price

Author’s Note: The Process Watch series explores key concepts about process control—defect inspection and metrology—for the semiconductor industry. Following the previous installments, which examined the 10 fundamental truths of process control, this new series of articles highlights additional trends in process control, including successful implementation strategies and the benefits for IC manufacturing.

While working at the Guinness® brewing company in Dublin, Ireland in the early 1900s, William Sealy Gosset developed a statistical algorithm called the T-test [1]. Gosset used this algorithm to determine the best-yielding varieties of barley to minimize costs for his employer, but to help protect Guinness’ intellectual property he published his work under the pen name “Student.” The version of the T-test that we use today is a refinement made by Sir Ronald Fisher, but it is still commonly referred to as Student’s T-test. This paper does not address the mathematical nature of the T-test itself but rather looks at the amount of data required to consistently achieve a 95% confidence level in the T-test result.

A T-test is a statistical algorithm used to determine whether two samples are part of the same parent population. It does not resolve the question unequivocally, but rather calculates the probability that the two samples are part of the same parent population. As an example, if we developed a new methodology for cleaning an etch chamber, we would want to show that it resulted in fewer fall-on particles. Using a wafer inspection system, we could measure the particle count on wafers in the chamber following the old cleaning process and then measure the particle count again following the new cleaning process. A T-test would then tell us whether the difference between the two data sets is statistically significant or just the result of random fluctuations.
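The comparison can be sketched in a few lines of Python. This is a minimal illustration, not the authors’ procedure: the particle counts below are synthetic, and SciPy’s Welch variant of the two-sample T-test is used.

```python
# Minimal sketch: two-sample T-test on synthetic particle counts
# (fall-on particles per wafer before and after a new chamber clean).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
baseline = rng.normal(loc=50, scale=10, size=30)   # old clean process
new_clean = rng.normal(loc=42, scale=10, size=30)  # new clean process

# Welch's variant does not assume the two samples share a variance.
t_stat, p_value = stats.ttest_ind(baseline, new_clean, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# p < 0.05: the reduction is unlikely to be a random fluctuation.
```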

However, as shown in Figure 1, there are two ways that a T-test can give a false result: a false positive or a false negative. To confirm that the experimental data is actually different from the baseline, the T-test usually has to score less than 5% (i.e., less than a 5% probability of a false positive). However, if the T-test scores greater than 5% (a negative result), it doesn’t tell you anything about the probability of that result being false. The probability of a false negative is governed by the number of measurements. So there are always two criteria: (1) Did my experiment pass or fail the T-test? (2) Did I take enough measurements to be confident in the result? It is that last question that we try to address in this paper.

Figure 1. A “Truth Table” highlights the two ways that a T-test can give the wrong result.

Changes to the semiconductor manufacturing process are expensive propositions. Implementing a change that doesn’t do anything (false positive) is not only a waste of time but potentially harmful. Not implementing a change that could have been beneficial (false negative) could cost tens of millions of dollars in lost opportunity. It is important to have the appropriate degree of confidence in your results, and doing so requires a sample size that is appropriate for the size of the change you are trying to effect. In the example of the etch cleaning procedure, this means that inspection data from a sufficient number of wafers must be collected in order to determine whether or not the new clean procedure truly reduces particle count.

In general, the bigger the difference between two things, the easier it is to tell them apart. It is easier to tell red from blue than it is to distinguish between two different shades of red or between two different shades of blue. Similarly, the less variability there is in a sample, the easier it is to see a change [2]. In statistics the variability (sometimes referred to as noise) is usually measured in units of standard deviation (σ). It is often convenient to also express the difference in the means of two samples in units of σ (e.g., the mean of the experimental results was 1σ below the mean of the baseline), because doing so normalizes the results to a common unit of measure. Simply stating that two means are separated by some absolute value is not very informative (e.g., the average of A is greater than the average of B by 42). Expressing that same difference in units of standard deviation immediately puts the problem in context and conveys how far apart the two values are in relative terms (e.g., the average of A is greater than the average of B by one standard deviation).
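A normalized difference of this kind is straightforward to compute. The sketch below uses the pooled standard deviation of the two samples as the unit of measure; this is one common convention (essentially Cohen’s d) and is our assumption, since the article does not specify how σ is pooled.

```python
import numpy as np

def delta_in_sigma(a, b):
    """Difference of the two sample means, expressed in units of the
    pooled standard deviation (one common normalization choice)."""
    pooled_sd = np.sqrt((np.var(a, ddof=1) + np.var(b, ddof=1)) / 2)
    return (np.mean(a) - np.mean(b)) / pooled_sd

# Example: an absolute difference of 42 means little on its own,
# but expressed in sigma it is immediately interpretable.
a = np.array([142.0, 150.0, 138.0, 160.0, 145.0])
b = np.array([100.0, 108.0, 96.0, 118.0, 103.0])
print(f"absolute difference: {np.mean(a) - np.mean(b):.1f}")
print(f"difference in sigma: {delta_in_sigma(a, b):.1f} sigma")
```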

Figure 2 shows two examples of data sets, before and after a change. These can be thought of in terms of the etch chamber cleaning experiment discussed earlier: the baseline data is the particle count per wafer before the new clean procedure and the results data is the particle count per wafer after it. Figure 2A shows the result of a small change in the mean of a data set with a high standard deviation, and Figure 2B shows the same sized change in the mean but with less noisy data (lower standard deviation). You will require more data (e.g., more wafers inspected) to confirm the change in Figure 2A than in Figure 2B, simply because the signal-to-noise ratio is lower in 2A even though the absolute change is the same in both cases.

Figure 2. Both charts show the same absolute change, before and after, but 2B (right) has much lower standard deviation. When the change is small relative to the standard deviation, as in 2A (left), it will require more data to confirm it.

The question is: how much data do we need to confidently tell the difference? Visually, we can see this when we plot the data in terms of the Standard Error (SE). The SE can be thought of as the error in calculating the average (e.g., the average was X +/- SE) and is given by σ/√n, where n is the sample size. Figure 3 shows the SE for two different samples as a function of the number of measurements, n.
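The 1/√n behavior of the SE is easy to check numerically. A small sketch, with an arbitrary σ chosen purely for illustration:

```python
import numpy as np

sigma = 10.0  # arbitrary standard deviation, for illustration only
for n in (4, 16, 64, 256):
    se = sigma / np.sqrt(n)  # standard error of the mean
    print(f"n = {n:3d}  ->  SE = {se:5.2f}")
# Each 4x increase in sample size halves the SE, which is why the
# error bars in Figure 3 shrink (and eventually separate) as n grows.
```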

Figure 3. The Standard Error (SE) in the average of two samples with different means. In this case the standard deviation is the same in both data sets, but that need not be the case. With greater than x measurements the error bars no longer overlap and one can state with 95% confidence that the two populations are distinct.

For a given difference in the means and a given standard deviation we can calculate the number of measurements, x, required to eliminate the overlap in the Standard Errors of these two measurements (at a given confidence level).

The actual equation to determine the correct sample size for the T-test is given by

n = 2(Z1-α/2 + Zβ)² / Δ²          (Equation 1)

where n is the required sample size, Δ (“Delta”) is the difference between the two means measured in units of standard deviation (σ) and Zx is the critical value of the distribution at probability x. For α=0.05 (5% chance of a false positive) and β=0.95 (5% chance of a false negative), Z1-α/2 and Zβ are equal to 1.960 and 1.645 respectively (Z values for other values of α and β are available in most statistics textbooks, Microsoft® Excel® or on the web). As seen in Figure 3 and shown mathematically in Equation 1, as the difference between the two populations (Δ) becomes smaller, the number of measurements required to tell them apart grows in proportion to 1/Δ². Figure 4 shows the required sample size as a function of the Δ between the means expressed in units of σ. As expected, for large changes, greater than 3σ, one can confirm the T-test result 95% of the time with very little data. As Δ gets smaller, more measurements are required to consistently confirm the change. A change of only one standard deviation requires 26 measurements before and after, but a change of 0.5σ requires over 100 measurements.
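Equation 1 is straightforward to implement. In the sketch below the critical values come from SciPy’s normal distribution rather than a printed Z table; the function name is ours, and rounding up to a whole number of measurements is our assumption.

```python
import math
from scipy import stats

def required_sample_size(delta, alpha=0.05, power=0.95):
    """Equation 1: measurements needed (before and after) to detect a
    shift of `delta` standard deviations with the given error rates."""
    z_alpha = stats.norm.ppf(1 - alpha / 2)  # 1.960 for alpha = 0.05
    z_beta = stats.norm.ppf(power)           # 1.645 for beta = 0.95
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / delta ** 2)

for d in (3.0, 2.0, 1.0, 0.5):
    print(f"Delta = {d:3.1f} sigma  ->  n = {required_sample_size(d)}")
# Delta = 1.0 sigma -> 26 measurements; Delta = 0.5 sigma -> 104,
# matching the values quoted in the text and plotted in Figure 4.
```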

Figure 4. Sample size required to confirm a given change in the mean of two populations with 5% false positives and 5% false negatives.

The relationship between the size of the change and the minimum number of measurements required to detect it has ramifications for the type of metrology or inspection tool that can be employed to confirm a given change. Figure 5 uses the results from Figure 4 to show the time it would take to confirm a given change with different tool types. In this example the sample size is measured in number of wafers. For fast tools (high throughput, such as laser scanning wafer inspection systems) it is feasible to confirm relatively small improvements (<0.5σ) in the process because they can make the 200 required measurements (100 before and 100 after) in a relatively short time. Slower tools such as e-beam inspection systems are limited to detecting only gross changes in the process, where the improvement is greater than 2σ; even here, the measurement time alone means that it can be weeks before one can confirm a positive result. For the etch chamber cleaning example, it would be necessary to quickly determine the results of the change in clean procedure so that the etch tool could be put back into production. Thus, the best inspection system to determine the change in particle counts would be a high-throughput system that can detect the particles of interest with low wafer-to-wafer variability.
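The time axis of Figure 5 amounts to dividing the required wafer count by each tool’s throughput. The sketch below illustrates that calculation; the throughput numbers are hypothetical placeholders, not the values behind the figure.

```python
# Hypothetical throughputs in wafers per hour; real values vary widely
# by tool model, recipe and sensitivity setting.
throughputs = {
    "laser scattering inspection": 50.0,
    "broadband plasma inspection": 10.0,
    "e-beam inspection": 0.1,
}

n_wafers = 200  # 100 before + 100 after, for a <0.5-sigma change

for tool, wph in throughputs.items():
    hours = n_wafers / wph
    print(f"{tool:28s}: {hours:8.1f} h  ({hours / 24:5.1f} days)")
# At e-beam throughput the measurement time alone is ~2000 hours, i.e.
# many weeks, which is why slow tools can only confirm gross changes.
```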

Figure 5. The measurement time required to determine a given change for process control tools with four different throughputs (e-Beam, Broadband Plasma, Laser Scattering and Metrology).

Experiments are expensive to run. They are a waste of time and resources if they result in a false positive, and they can leave millions of dollars of opportunity unrealized if they result in a false negative. To have the appropriate degree of confidence in your results, you must use the correct sample size (and thus the appropriate tools) for the size of the change you are trying to effect.

References:

  1. https://en.wikipedia.org/wiki/William_Sealy_Gosset
  2. Process Watch: Know Your Enemy, Solid State Technology, March 2015

About the Authors:

Dr. David W. Price is a Senior Director at KLA-Tencor Corp. Dr. Douglas Sutherland is a Principal Scientist at KLA-Tencor Corp. Over the last 10 years, Dr. Price and Dr. Sutherland have worked directly with more than 50 semiconductor IC manufacturers to help them optimize their overall inspection strategy to achieve the lowest total cost. This series of articles attempts to summarize some of the universal lessons they have observed through these engagements.

In 2016, annual global semiconductor sales reached their highest-ever point, at $339 billion worldwide. In that same year, the semiconductor industry spent about $7.2 billion worldwide on wafers that serve as the substrates for microelectronics components, which can be turned into transistors, light-emitting diodes, and other electronic and photonic devices.

A new technique developed by MIT engineers may vastly reduce the overall cost of wafer technology and enable devices made from more exotic, higher-performing semiconductor materials than conventional silicon.

The new method, reported today in Nature, uses graphene — single-atom-thin sheets of graphite — as a sort of “copy machine” to transfer intricate crystalline patterns from an underlying semiconductor wafer to a top layer of identical material.

The engineers worked out carefully controlled procedures to place single sheets of graphene onto an expensive wafer. They then grew semiconducting material over the graphene layer. They found that graphene is thin enough to appear electrically invisible, allowing the top layer to see through the graphene to the underlying crystalline wafer, imprinting its patterns without being influenced by the graphene.

Graphene is also rather “slippery” and does not tend to stick to other materials easily, enabling the engineers to simply peel the top semiconducting layer from the wafer after its structures have been imprinted.

Jeehwan Kim, the Class of 1947 Career Development Assistant Professor in the departments of Mechanical Engineering and Materials Science and Engineering, says that in conventional semiconductor manufacturing, the wafer, once its crystalline pattern is transferred, is so strongly bonded to the semiconductor that it is almost impossible to separate without damaging both layers.

“You end up having to sacrifice the wafer — it becomes part of the device,” Kim says.

With the group’s new technique, Kim says manufacturers can now use graphene as an intermediate layer, allowing them to copy and paste the wafer, separate a copied film from the wafer, and reuse the wafer many times over. In addition to saving on the cost of wafers, Kim says this opens opportunities for exploring more exotic semiconductor materials.

“The industry has been stuck on silicon, and even though we’ve known about better performing semiconductors, we haven’t been able to use them, because of their cost,” Kim says. “This gives the industry freedom in choosing semiconductor materials by performance and not cost.”

Kim’s research team discovered this new technique at MIT’s Research Laboratory of Electronics. Kim’s MIT co-authors are first author and graduate student Yunjo Kim; graduate students Samuel Cruz, Babatunde Alawonde, Chris Heidelberger, Yi Song, and Kuan Qiao; postdocs Kyusang Lee, Shinhyun Choi, and Wei Kong; visiting research scholar Chanyeol Choi; Merton C. Flemings-SMA Professor of Materials Science and Engineering Eugene Fitzgerald; professor of electrical engineering and computer science Jing Kong; and assistant professor of mechanical engineering Alexie Kolpak; along with Jared Johnson and Jinwoo Hwang from Ohio State University, and Ibraheem Almansouri of Masdar Institute of Science and Technology.

Graphene shift

Since graphene’s discovery in 2004, researchers have been investigating its exceptional electrical properties in hopes of improving the performance and cost of electronic devices. Graphene is an extremely good conductor of electricity, as electrons flow through graphene with virtually no friction. Researchers, therefore, have been intent on finding ways to adapt graphene as a cheap, high-performance semiconducting material.

“People were so hopeful that we might make really fast electronic devices from graphene,” Kim says. “But it turns out it’s really hard to make a good graphene transistor.”

In order for a transistor to work, it must be able to turn a flow of electrons on and off, to generate a pattern of ones and zeros, instructing a device on how to carry out a set of computations. As it happens, it is very hard to stop the flow of electrons through graphene, making it an excellent conductor but a poor semiconductor.

Kim’s group took an entirely new approach to using graphene in semiconductors. Instead of focusing on graphene’s electrical properties, the researchers looked at the material’s mechanical features.

“We’ve had a strong belief in graphene, because it is a very robust, ultrathin material and forms very strong covalent bonding between its atoms in the horizontal direction,” Kim says. “Interestingly, it has very weak van der Waals forces, meaning it doesn’t react with anything vertically, which makes graphene’s surface very slippery.”

Copy and peel

The team now reports that graphene, with its ultrathin, Teflon-like properties, can be sandwiched between a wafer and its semiconducting layer, providing a barely perceptible, nonstick surface through which the semiconducting material’s atoms can still rearrange in the pattern of the wafer’s crystals. The material, once imprinted, can simply be peeled off from the graphene surface, allowing manufacturers to reuse the original wafer.

The team found that its technique, which they term “remote epitaxy,” was successful in copying and peeling off layers of semiconductors from the same semiconductor wafers. The researchers had success in applying their technique to exotic wafer and semiconducting materials, including indium phosphide, gallium arsenide, and gallium phosphide, materials that are 50 to 100 times more expensive than silicon.

Kim says that this new technique makes it possible for manufacturers to reuse wafers — of silicon and higher-performing materials — “conceptually, ad infinitum.”

An exotic future

The group’s graphene-based peel-off technique may also advance the field of flexible electronics. In general, wafers are very rigid, making the devices they are fused to similarly inflexible. Kim says that semiconductor devices such as LEDs and solar cells can now be made to bend and twist. In fact, the group demonstrated this possibility by fabricating a flexible LED display, patterned in the MIT logo, using their technique.

“Let’s say you want to install solar cells on your car, which is not completely flat — the body has curves,” Kim says. “Can you coat your semiconductor on top of it? It’s impossible now, because it sticks to the thick wafer. Now, we can peel off, bend, and you can do conformal coating on cars, and even clothing.”

Going forward, the researchers plan to design a reusable “mother wafer” with regions made from different exotic materials. Using graphene as an intermediary, they hope to create multifunctional, high-performance devices. They are also investigating mixing and matching various semiconductors and stacking them up as a multimaterial structure.

“Now, exotic materials can be popular to use,” Kim says. “You don’t have to worry about the cost of the wafer. Let us give you the copy machine. You can grow your semiconductor device, peel it off, and reuse the wafer.”

At SEMICON Southeast Asia 2017, Dr. Chen Fusen, CEO of Kulicke & Soffa Pte Ltd, Singapore, will give a keynote on digital transformation in the manufacturing sector. Chen believes that Smart Manufacturing, or Industry 4.0, is no longer hype but real, and Asia needs to get on board sooner rather than later. SEMICON Southeast Asia (SEA) 2017, held at the SPICE arena in Penang on 25-27 April, is Asia’s premier showcase for electronics manufacturing innovation.

“Digital transformation has proven to provide solutions for addressing challenges in the manufacturing industry but there is still the issue of acceptance as well as lack of skills and knowledge that needs to be addressed,” said Chen. “With disruptive technology changing our world, I expect that more companies will see the value of their investments realised as this technology accelerates the creation of more individualised products and services.”

Dr. Hai Wang from NXP Semiconductors Singapore Pte Ltd agreed that more consumer-related innovations would stem from digital transformation as demand for solutions that provide efficiency and security increases. “At NXP, we look at developing advanced cyber security solutions for the automotive industry, such as tracking and analysing intelligence around connected and automated vehicles, which will help to counter any adverse threats in real time. These innovations are real and will soon mark a shift in the future of automation and manufacturing. It is vital that we embrace the change and adapt accordingly,” he said.

Other speakers at SEMICON SEA also feel strongly about the importance of Smart Manufacturing and digital transformation. David Chang of HTC Corporation, Taiwan, sees a dramatic shift in the value of being a “smart” manufacturer to address the rising demand for innovation in consumer products and services. “We have seen virtual reality technology offered by products such as HTC VIVE™ really shaping the future of the world. Transformative innovations such as this will pave the way for disruptive technology to be coupled into business models to benefit consumers in the long term,” he said.

These three speakers join a long list of thought leaders from the electronics manufacturing sector speaking at SEMICON SEA 2017, including Jamie Metcalfe from Mentor Graphics U.S., Chiang Gai Kit from Omron Asia Pacific Singapore, Ranjan Chatterjee from Cimetrix U.S. and Duncan Lee from Intel Products Malaysia. Topics will cover issues relevant to the transformation of the manufacturing industry, ranging from next-generation manufacturing to system-level integration, and exhibitions will highlight the market and technology trends that are driving investment and growth in all sectors across the region.

The conference also aims to champion regional collaboration by creating new business opportunities for customers and fostering stronger cross-regional engagement, reaching buyers, engineers and key decision-makers in the Southeast Asia microelectronics industry, including buyers from Malaysia, Singapore, Thailand, Indonesia, the Philippines, and Vietnam.

Learn more about SEMICON Southeast Asia 2017 in Penang, Malaysia on 25-27 April: http://www.semiconsea.org/.

Worldwide semiconductor revenue is forecast to total $386 billion in 2017, an increase of 12.3 percent from 2016, according to Gartner, Inc. Favorable market conditions that gained momentum in the second half of 2016, particularly for commodity memory, have accelerated and raised the outlook for the market in 2017 and 2018. However, the memory market is fickle, and additional capacity in both DRAM and NAND flash is expected to result in a correction in 2019.

“While price increases for both DRAM and NAND flash memory are raising the outlook for the overall semiconductor market, they will also put pressure on margins for system vendors of smartphones, PCs and servers,” said Jon Erensen, research director at Gartner. “Component shortages, a rising bill of materials, and the prospect of having to counter by raising average selling prices (ASPs) will create a volatile market in 2017 and 2018.”

PC DRAM pricing has doubled since the middle of 2016. A 4GB module that cost $12.50 has jumped to just under $25 today. NAND flash ASPs increased sequentially in the second half of 2016 and the first quarter of 2017. Pricing for both DRAM and NAND is expected to peak in the second quarter of 2017, but relief is not expected until later in the year as content increases in key applications, such as smartphones, have vendors scrambling for supply.

“With memory vendors expanding their margins through 2017, the temptation will be to add new capacity,” said Mr. Erensen. “We also expect to see China make a concerted effort to join the memory industry, setting the market up for a downturn in 2019.”

Unit production estimates for premium smartphones, graphics cards, video game consoles and automotive applications have improved and contributed to the stronger outlook in 2017. In addition, electronic equipment with heavy exposure to DRAM and NAND flash saw semiconductor revenue estimates increase. This includes PCs, ultramobiles, servers and solid-state drives.

“The outlook for emerging opportunities for semiconductors in the Internet of Things (IoT) and wearable electronics remains choppy with these markets still in the early stages of development and too small to have a significant impact on overall semiconductor revenue growth in 2017,” said Mr. Erensen.

Silicon Integration Initiative, Inc. (Si2), an integrated circuit research and development joint venture, has contributed new power modeling technology to the IEEE P2416 System Level Power Model Working Group. The transfer is aimed at creating a standardized means for modeling systems-on-chip (SoC) designed for lower power consumption.

Jerry Frenkil, Si2 director of OpenStandards, said that the Si2 Low Power Working Group developed the new technology to fill several holes in the flow for estimating and controlling SoC power consumption. “This new modeling technology provides accurate and efficient, early estimation of both static and dynamic power, including critical temperature dependencies, using a consistent model throughout the design flow. There’s currently no standard way to represent power data for use at the system level, especially across a range of process, voltage and temperature points in a single model.”

IEEE P2416 is an essential component of IEEE’s coordinated effort to improve system-level design. This effort also includes the IEEE 1801 standard, which expresses design intent. Its latest update, IEEE 1801-2015, includes support for power-state modeling. “P2416 provides power data representations to complement 1801 power-state modeling. Together, 1801 and 2416 will form a complete power model for hardware IP at any level of abstraction,” Frenkil added.

Organizations that contributed to the model development are: ANSYS, Cadence, Intel, IBM, Entasys, and North Carolina State University.

Nagu Dhanwada, senior technical staff member at IBM, chairs both the IEEE P2416 and Si2 Power Modeling Working Groups. According to Dhanwada, “This is a major contribution to the P2416 effort. As the first technology contribution to the P2416 Working Group, it’s expected to form a solid foundation for the resulting standard.”

“This new modeling technology is the first significant advance in power modeling in quite a long time,” said Paul Traynar, technical fellow at ANSYS and a contributor to the Si2 effort. “It will enable SoC designers to get consistent power estimates across design abstractions and especially early in the system design process.”

Julien Sebot, CPU architect at Intel and a member of the IEEE P2416 Working Group, added, “The Si2 contribution addresses the top priorities identified by the P2416 Working Group. The ability to create accurate, early estimates and to reuse and refine those estimates during the design process is essential in creating energy efficient systems-on-chip. Si2’s contribution is a major step toward addressing that need.”

The IEEE P2416 Working Group has already started reviewing the Si2 contribution. In parallel, Si2 will further develop the technology for its members, with expanded model semantics, proof-of-concept demonstrations, and reference design implementations.

This model and its use will be described as part of a DAC 2017 tutorial, “How Power Modeling Standards Power Your Designs,” Monday, June 19, 3:30-5:00 p.m., Room 18AB, Austin Convention Center.

MagnaChip Semiconductor Corporation (NYSE: MX), a Korea-based designer and manufacturer of analog and mixed-signal semiconductor products, announced today that it will host its Annual U.S. Foundry Technology Symposium at Hilton Santa Clara, California, on June 7th, 2017.

The primary purpose of the Foundry Technology Symposium is to showcase MagnaChip’s most up-to-date technology offerings and to provide an in-depth understanding of MagnaChip’s manufacturing capabilities, its specialty technology processes, target applications and end-markets. During the symposium, MagnaChip also plans to discuss current and future semiconductor foundry business trends and to feature presentations on key markets by guest speakers.

While providing an in-depth overview of its specialty processes, MagnaChip will also highlight its technology portfolio and future roadmap, including mixed-signal technologies that support applications in the Internet of Things (IoT) and the RF switch sector, and Bipolar-CMOS-DMOS (BCD) for high-performance analog and power management applications. In addition, MagnaChip will feature applications for Ultra-High Voltage (UHV) technology, such as LED lighting and AC-DC chargers, and cover Non-Volatile Memory (NVM)-related technologies, such as touch ICs, automotive MCUs and other customer-specific applications. MagnaChip will also present its technologies used in applications including smartphones, tablet PCs, automotive, industrial, LED lighting and the wearables segments, and will review its customer-friendly design environment and an online customer service tool known as “iFoundry.”

“We are very pleased to host MagnaChip’s Annual Foundry Technology Symposium in the US again this year,” said YJ Kim, Chief Executive Officer of MagnaChip. “We plan to offer participants an opportunity to better understand the foundry and the application market dynamics, and to provide insights into MagnaChip’s specialty process technologies.” MagnaChip has approximately 466 proprietary process flows it can utilize and offer to its foundry customers.

Analogix Semiconductor, Inc. and Beijing Shanhai Capital Management Co, Ltd. (Shanhai Capital), today jointly announced the completion of the approximately $500 million acquisition of Analogix Semiconductor. China Integrated Circuit Industry Investment Fund Co., Ltd. (China IC Fund) joined Shanhai Capital’s fund as one of the limited partners.

“We are very pleased to have completed the transaction,” said Dr. Kewei Yang, Analogix Semiconductor’s chairman and CEO. “Enhanced by the strong financial support of our new investors, Analogix’s future is brighter than ever. We are excited to continue building and growing Analogix into a global leader in high-performance semiconductors.”

“As Analogix’s key financial partner and investor, we look forward to leveraging our resources to accelerate the company’s growth into new markets,” said Mr. Xianfeng Zhao, Chairman of Shanhai Capital. “We will build on the strength of the company’s core technology and customer relationships to create an exceptional semiconductor company that will be publicly listed in China.”

Sino-American International Investment Ltd and Needham & Company, LLC served as financial advisors to Analogix Semiconductor. O’Melveny & Myers LLP served as legal counsel to Analogix Semiconductor.

Pillsbury Winthrop Shaw Pittman LLP and Jingtian & Gongcheng acted as legal counsel to Beijing Shanhai Capital Management Co.

IHS Markit (Nasdaq: INFO) announced that the worldwide semiconductor market showed signs of recovery in 2016 following a down year in 2015. In 2016, the market posted a year-end growth rate of 2 percent with chip growth seen across multiple market segments. Global revenue came in at $352.4 billion, up from $345.6 billion in 2015.

Key growth drivers

Key drivers of this growth were DRAM and NAND flash memory, which grew more than 30 percent collectively in the second half of 2016. Key to this turnaround were supply constraints and strong demand, coupled with an increase in ASPs. These factors are expected to drive memory revenue into record territory throughout 2017.

Semiconductors used for automotive applications were also a key driver of 2016 growth, with a 9.7 percent expansion by year-end. Chip content in cars continues to climb, with micro components and memory integrated circuits (IC) leading the pack, both experiencing over 10 percent growth in automotive applications.

“The strong component demand that drove record capital expenditures in 2016 also provided the industry with advanced technology platforms which will support further semiconductor revenue growth in 2017,” said Len Jelinek, Senior Director and Chief Analyst for Semiconductor Manufacturing at IHS Markit.

Continued consolidation

Continuing a recent trend, the semiconductor market saw another year of intense consolidation with no signs of slowing down. The year began with the close of the biggest-ever acquisition in the semiconductor industry. Avago Technologies finalized its $37 billion acquisition of Broadcom Corp. to form Broadcom Limited, which jumped to rank fourth in terms of market share (Avago previously ranked 11th). This acquisition resulted in the newly formed company increasing its market share in several market segments, including taking a large lead in the wired application market.

“After some selective divestiture, Broadcom Limited has focused on market segments where its customer base holds dominant market share positions. These also tend to be markets which have fairly stable and visible TAM growth,” said Senior Analyst Brad Shaffer. “These characteristics may help entrench the company’s market share positions in areas where it chooses to compete,” added Shaffer.

Among the top 20 semiconductor suppliers, ON Semiconductor and nVidia enjoyed the largest revenue growth, followed closely by MediaTek. ON and MediaTek achieved growth through multiple acquisitions, while nVidia saw an enormous demand for its GPU technology as it moves into new markets and applications.

Qualcomm remained the top fabless company in 2016 while MediaTek and nVidia moved into the number two and three spots, respectively. The fabless company with the largest market share gain was Cirrus Logic, a major supplier for Apple and Samsung mobile phones; it moved up five spots in 2016, to number 10.

Intel remains in the number one spot for semiconductor suppliers, followed by Samsung. Qualcomm comes in at number three, with plans to increase its market share in 2017 with its pending acquisition of NXP.

Find more information on this topic in the latest release of the Competitive Landscaping Tool from the Semiconductors & Components service at IHS Markit.

The Semiconductor Industry Association (SIA), representing U.S. leadership in semiconductor manufacturing, design, and research, today announced worldwide sales of semiconductors reached $30.4 billion for the month of February 2017, an increase of 16.5 percent compared to the February 2016 total of $26.1 billion. Global sales in February were 0.8 percent lower than the January 2017 total of $30.6 billion, but still exceeded normal seasonal market performance. February marked the global market’s largest year-to-year growth since October 2010. All monthly sales numbers are compiled by the World Semiconductor Trade Statistics (WSTS) organization and represent a three-month moving average.

“The global semiconductor industry has posted strong sales early in 2017, with memory products like DRAM and NAND flash leading the way,” said John Neuffer, president and CEO, Semiconductor Industry Association. “Year-to-year sales increased by double digits across most regional markets, with the China and Americas markets showing particularly strong growth. Global market trends are favorable for continuing sales growth in the months ahead.”

Year-to-year sales increased across all regions: China (25.0 percent), the Americas (19.1 percent), Japan (11.9 percent), Asia Pacific/All Other (11.2 percent), and Europe (5.9 percent). Month-to-month sales increased modestly in Asia Pacific/All Other (0.5 percent) but decreased slightly across all others: Europe (-0.6 percent), Japan (-0.9 percent), China (-1.0 percent), and the Americas (-2.3 percent).

Neuffer also noted the recent growth of foreign semiconductor markets is a reminder of the importance of expanding U.S. semiconductor companies’ access to global markets, which is one of SIA’s policy priorities for 2017. The U.S. industry accounts for nearly half of the world’s total semiconductor sales, and more than 80 percent of U.S. semiconductor company sales are to overseas markets, helping make semiconductors one of America’s top exports.

February 2017 (sales in billions of U.S. dollars)

Month-to-Month Sales

Market                    Last Month   Current Month   % Change
Americas                        6.13            5.99      -2.3%
Europe                          2.84            2.82      -0.6%
Japan                           2.79            2.77      -0.9%
China                          10.15           10.05      -1.0%
Asia Pacific/All Other          8.72            8.76       0.5%
Total                          30.64           30.39      -0.8%

Year-to-Year Sales

Market                    Last Year    Current Month   % Change
Americas                        5.03            5.99      19.1%
Europe                          2.66            2.82       5.9%
Japan                           2.47            2.77      11.9%
China                           8.04           10.05      25.0%
Asia Pacific/All Other          7.88            8.76      11.2%
Total                          26.08           30.39      16.5%

Three-Month-Moving Average Sales

Market                   Sept/Oct/Nov   Dec/Jan/Feb    % Change
Americas                        6.25            5.99      -4.2%
Europe                          2.88            2.82      -2.3%
Japan                           2.90            2.77      -4.6%
China                          10.04           10.05       0.1%
Asia Pacific/All Other          8.94            8.76      -2.0%
Total                          31.02           30.39      -2.0%

A coalition of leaders from the global tech, defense, and aerospace industries, led by the Semiconductor Industry Association (SIA) and Semiconductor Research Corporation (SRC), today released a report identifying the key areas of scientific research needed to advance innovation in semiconductor technology and fulfill the promise of emerging technologies such as artificial intelligence (AI), the Internet of Things (IoT), and supercomputing. The report, titled Semiconductor Research Opportunities: An Industry Vision and Guide, also calls for robust government and industry investments in research to unlock new technologies beyond conventional, silicon-based semiconductors and to advance next-generation semiconductor manufacturing methods.

“Semiconductor technology is foundational to America’s innovation infrastructure and global technology leadership,” said John Neuffer, president and CEO of SIA, which represents U.S. leadership in semiconductor manufacturing, design, and research. “Our industry has pushed Moore’s Law to levels once unfathomable, enabling technologies that have driven economic growth and transformed society. Now, as it becomes increasingly challenging and costly to maintain the breakneck pace of putting more transistors on the same size of silicon real estate, industry, academia, and government must intensify research partnerships to explore new frontiers of semiconductor innovation and to foster the continued growth of emerging technologies. Taking swift action to implement the recommendations from the Vision report will help usher in a new era of semiconductor technology and keep America at the head of the class in technological advancement.”

Neuffer also noted concern in the tech, research, and academic communities about proposed cuts to basic scientific research outlined in the Trump Administration’s fiscal year 2018 budget blueprint. Basic scientific research funded through agencies such as the National Science Foundation (NSF), the National Institute of Standards and Technology (NIST), the Defense Advanced Research Projects Agency (DARPA), and the Department of Energy (DOE) Office of Science has yielded tremendous dividends, helping launch technologies that underpin America’s economic strength and global competitiveness. The U.S. semiconductor industry invests about one-fifth of its revenue each year in R&D – the highest share of any industry. Neuffer expressed the semiconductor industry’s readiness to work with the Administration and Congress to enact a budget that embraces the strategic importance of research investments to America’s continued economic and technological strength.

“Continued and predictable advancements in semiconductor technology have fueled the growth of many industries, including those historically based on mechanics such as automotive,” said Ken Hansen, president & CEO of SRC. “As the rate of dimensional scaling has slowed, the need to reinvigorate the investment in semiconductor research has become increasingly clear. Now is the time for industry, government, and academia to double down on their resources and efforts to ensure the pace of renewal continues. Alternative strategies and techniques to traditional scaling for performance are now being explored by SRC. Furthermore, with the support of SIA, SRC is building research programs that align with the Vision report, including complementary technologies such as advanced packaging and communications. An infusion of funding is vital to expand the research breadth beyond the historical focus areas, enabling the industry to keep its promise of a continuous stream of products with improved performance at reduced cost. As industries look to future areas of growth and innovation, SIA and SRC are laying the groundwork for new discoveries through fundamental research.”

The Vision report is the culmination of work by a diverse group of industry experts and leaders, including chief technology officers at numerous leading semiconductor companies, who came together over a nine-month period in 2016-2017 to identify areas in which research is essential to progress. The report, which will be updated periodically moving forward, has active participation from the industry’s leading chip makers, fabless companies, IP providers, equipment and material suppliers, and research organizations. It will serve as a foundational guide for defining the semiconductor industry’s future research paths in 14 distinct but complementary research areas. These areas, outlined in the Vision report, are as follows:

1. Advanced Devices, Materials, and Packaging
2. Interconnect Technology and Architecture
3. Intelligent Memory and Storage
4. Power Management
5. Sensor and Communication Systems
6. Distributed Computing and Networking
7. Cognitive Computing
8. Bio-Influenced Computing and Storage
9. Advanced Architectures and Algorithms
10. Security and Privacy
11. Design Tools, Methodologies, and Test
12. Next-Generation Manufacturing Paradigm
13. Environmental Health and Safety: Materials and Processes
14. Innovative Metrology and Characterization