SEMI today announced an update of the SEMI World Fab Forecast report, which updates outlooks for 2015 and 2016. The report reveals that fab equipment spending increased almost 20 percent in 2014, will rise 15 percent in 2015, and will increase only 2-4 percent in 2016. Since November 2014, SEMI has made 270 updates to its World Fab Forecast report, which tracks fab spending for construction and equipment, as well as capacity changes, technology node transitions, and product type changes by fab.

                          2013      2014      2015        2016
Fab equipment*           $29.4     $35.2     $40.5       $41 to $42
Change % fab equipment  -10.0%     19.8%     15.0%       2% to 4%
Fab construction*         $8.8      $7.7      $5.2       $6.9
Change % construction    13.6%    -11.0%    -32.0%       +32.0%

* US$, in billions; Source: SEMI, March 2015

The SEMI World Fab Forecast and its related Fab Database reports track any equipment needed to ramp fabs, upgrade technology nodes, and expand or change wafer size, including new equipment, used equipment, or in-house equipment and spending on facilities for installation.

Fab spending, such as construction spending and equipment spending, is a fraction of a company’s total capital expenditure (capex). Typically, if capex trends upward, fab spending follows. Capex for most of the large semiconductor companies is expected to increase by 8 percent in 2015 and grow another 3 percent in 2016. These increases are driven by new fab construction projects and the ramp of new technology nodes. Spending on construction projects, which typically represent new cleanroom projects, will decline significantly, by 32 percent, in 2015, but is expected to rebound by 32 percent in 2016.

Comparing regions across the world, according to SEMI, the highest fab equipment spending in 2015 will occur in Taiwan, with US$11.9 billion, followed by Korea with US$9 billion. The Americas, the region with the third-largest spending, is forecast to spend about US$7 billion, though spending there will decline by 12 percent in 2015 and by another 12 percent in 2016. Fourth in spending is China, with US$4.7 billion in 2015 and US$4.2 billion in 2016. In other regions, Japan’s spending will grow by about 6 percent in 2015, to US$4 billion, and by 2 percent in 2016, to US$4.2 billion. The Europe/Mideast region will see growth of about 20 percent (US$2.7 billion) in 2015 and over 30 percent (US$3.5 billion) in 2016. South East Asia is expected to grow by about 15 percent (US$1.3 billion) in 2015 and 70 percent (US$2.2 billion) in 2016.

2015 is expected to be the second consecutive year of equipment spending growth. SEMI’s positive outlook for the year is based on spending trends tracked as part of our fab investment research. The “bottoms-up” company-by-company and fab-by-fab approach points to strong investments by foundries and memory companies driving this year’s growth.

The SEMI World Fab Forecast Report lists over 40 facilities making DRAM products. Many facilities have major spending for equipment and construction planned for 2015.

Pulsed measurements are defined in Part 1, and common pulsed measurement challenges are discussed in Part 2.

By DAVID WYBAN, Keithley Instruments, a Tektronix Company, Solon, Ohio

Performing a DC measurement starts with applying the test signal (typically a DC voltage), then waiting long enough for all the transients in the DUT and the test system to settle out. The measurements themselves are typically performed using a sigma-delta or integrating-type analog-to-digital converter (ADC). The conversion takes place over one or more power line cycles to eliminate noise in the measurements due to ambient power line noise in the test environment. Multiple measurements are often averaged to increase accuracy. It can take 100ms or longer to acquire a single reading using DC measurement techniques.
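The noise rejection described above comes from integrating over a whole number of power line cycles. The sketch below (a hypothetical numerical simulation, not instrument firmware; the sample rate and signal values are invented) shows how averaging samples spanning exactly one 60Hz cycle cancels line-frequency interference:

```python
import math

def integrate_over_cycles(samples, sample_rate, line_freq=60, nplc=1):
    """Average samples spanning an integer number of power line cycles (NPLC)."""
    n = int(round(nplc * sample_rate / line_freq))
    return sum(samples[:n]) / n

# Simulated input: a 1.0 V DC level corrupted by 60 Hz line noise
rate = 60000  # samples per second (illustrative)
sig = [1.0 + 0.1 * math.sin(2 * math.pi * 60 * i / rate) for i in range(rate)]

reading = integrate_over_cycles(sig, rate, nplc=1)
# Integrating over one full cycle averages the sinusoidal noise to nearly zero
```

Integrating over a non-integer number of cycles would leave a residual line-frequency component, which is why instruments specify integration time in NPLC (number of power line cycles).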

In contrast, pulsed measurements are fast. The test signal is applied only briefly before the signal is returned to some base level. To fit measurements into these short windows, sigma-delta ADCs are run at sub-power-line interval integration times; sometimes, the even faster successive approximation register (SAR) type ADCs are used. Because of these high speeds, readings from pulsed measurements are noisier than readings returned by DC measurements. However, in on-wafer semiconductor testing, pulse testing techniques are essential to prevent device damage or destruction. Wafers have no heat sinking to pull away heat generated by current flow; if DC currents were used, the heat would increase rapidly until the device was destroyed. Pulse testing allows applying test signals for very short periods, avoiding this heat buildup and damage.

Why use pulsed measurements?

The most common reason for using pulsed measurements is to reduce joule heating (i.e., device self-heating). When a test signal is applied to a DUT, the device consumes power and turns it into heat, increasing the device’s temperature. The longer that power is applied, the hotter the device becomes, which affects its electrical characteristics. If a DUT’s temperature can’t be kept constant, it can’t be characterized accurately. However, with pulsed testing, power is only applied to the DUT briefly, minimizing self-heating. Duty cycles of 1 percent or less are recommended to reduce the average power dissipated by the device over time. Pulsed measurements are designed to minimize the power applied to the device so much that its internal temperature rise is nearly zero, so heating will have little or no effect on the measurements.
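The 1 percent duty-cycle guideline can be made concrete with a simple model: average dissipated power is peak pulse power times duty cycle (a simplification that ignores bias-level power and thermal transients; the numbers below are illustrative):

```python
def average_power(pulse_power_w, pulse_width_s, period_s):
    """Average power dissipated in the DUT, assuming zero power between pulses."""
    duty_cycle = pulse_width_s / period_s
    return pulse_power_w * duty_cycle

# A 10 W pulse, 100 us wide, repeated every 10 ms (1 percent duty cycle)
p_avg = average_power(10.0, 100e-6, 10e-3)
# p_avg is 0.1 W, two orders of magnitude below the peak power
```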

Because they minimize joule heating, pulsed measurements are widely used in nanotechnology research, such as when characterizing delicate materials and structures like CNT FETs, semiconductor nanowires, graphene-based devices, molecular-based electronics and MEMS structures. The heat produced with traditional DC measurement techniques could easily alter or destroy them.

To survive high levels of continuous DC power, devices like MOSFETs and IGBTs require packaging with a solid metal backing and even heat-sinking. However, during the early stages of device development, packaging these experimental devices would be much too costly and time consuming, so early testing is performed at the wafer level. Because pulsed testing minimizes the power applied to a device, it allows for complete characterization of these devices on the probe station, reducing the cost of test.

The reduction in joule heating that pulsed testing allows also simplifies the process of characterizing devices at varying temperatures. Semiconductor devices are typically so small that it is impossible to measure their temperature directly with a probe. With pulsed measurements, however, the self-heating of the device can be made so insignificant that its internal temperature can be assumed to be equal to the surrounding ambient temperature. To characterize the device at a specific temperature, simply change the surrounding ambient temperature with a thermal chamber or temperature-controlled heat sink. Once the device has reached thermal equilibrium at the new ambient temperature, repeat the pulsed measurements to characterize the device at the new temperature.

Pulsed measurements are also useful for extending instruments’ operating boundaries. A growing number of power semiconductor devices are capable of operating at 100A or higher, but building an instrument capable of sourcing this much DC current would be prohibitively expensive. In pulse mode, however, these high power outputs are needed only for very short intervals, so the required energy can be stored in capacitors from a smaller power supply and delivered in one short burst. This allows instruments like the Model 2651A High Power SourceMeter SMU instrument to combine sourcing up to 50A with precision current and voltage measurements.

Pulsed I-V vs. transient measurements

Pulsed measurements come in two forms: pulsed I-V and transient. Pulsed I-V (FIGURE 1) is a technique for gathering DC-like current vs. voltage curves using pulses rather than DC signals. In the pulsed I-V technique, the current and voltage are measured near the end of the flat top of the pulse, before the falling edge. In this technique, the shape of the pulse is extremely important because it determines the quality of the measurement. If the top of the pulse has not settled before this measurement is taken, the resulting reading will be noisy, incorrect, or both. Sigma-delta or integrating ADCs should be configured to perform their conversion over as much of this flat top as possible to maximize accuracy and reduce measurement noise.

FIGURE 1. Pulse I-V technique.

Two techniques can improve the accuracy of pulsed I-V measurements. If the width of the pulse and measurement speed permit, multiple measurements made during the flat portion of the pulse can be averaged together to create a “spot mean” measurement. This technique is commonly employed with instruments that use high-speed successive approximation register (SAR) ADCs, which perform conversions quickly, often at rates of 1μs per sample or faster, thereby sacrificing resolution for speed. At these high speeds, many samples can be made during the flat portion of the pulse. Averaging as many samples as possible enhances the resolution of the measurements and reduces noise. Many instruments have averaging filters that can be used to produce a single reading. If even greater accuracy is required, the measurement can be repeated over several pulses and the readings averaged to get a single reading. To obtain valid results using this method, the individual pulsed measurements should be made in quick succession to avoid variations in the readings due to changes in temperature or humidity.
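The spot mean described above reduces to averaging only the settled samples. A minimal Python sketch (the sample values and window indices are invented for illustration):

```python
def spot_mean(samples, start, stop):
    """Average the samples captured during the settled flat top of a pulse."""
    window = samples[start:stop]
    return sum(window) / len(window)

# Hypothetical SAR-ADC capture at 1 us/sample: rising edge, flat top, falling edge
capture = [0.0, 2.1, 4.9, 5.02, 4.97, 5.01, 4.99, 5.01, 2.4, 0.0]
level = spot_mean(capture, 3, 8)  # average only the settled region
```

Averaging n samples reduces random noise by roughly the square root of n, which is why as much of the flat top as possible should be included in the window.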

Transient pulsed measurements (FIGURE 2) are performed by sampling the signal at high speed to create a signal vs. time waveform. An oscilloscope is often used for these measurements but they can also be made with traditional DC instruments by running the ADCs at high speed. Some DC instruments even include high-speed SAR type ADCs for performing transient pulsed measurements. Transient measurements are useful for investigating device behaviors like self-heating and charge trapping.

FIGURE 2. Transient pulse measurements.

Instrumentation options

The simplest pulse measurement instrumentation option is a pulse generator to source the pulse combined with an oscilloscope to measure the pulse (FIGURE 3). Voltage measurements can be made by connecting a probe from the scope directly to the DUT; current measurements can be made by connecting a current probe around one of the DUT test leads. If a current probe is unavailable, a precision shunt resistor can be placed in series with the device and the voltage across the shunt measured with a standard probe, then converted to current using a math function in the scope. This simple setup offers a variety of advantages. Pulse generators provide full control over pulse width, pulse period, rise time and fall time. They are capable of pulse widths as narrow as 10 nanoseconds and rise and fall times as short as 2-3 nanoseconds. Oscilloscopes are ideal for transient pulse measurements because of their ability to sample the signal at very high speeds.
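When a shunt resistor stands in for a current probe, the scope’s math function is simply Ohm’s law applied sample by sample. A sketch of that conversion (the resistor value and waveform are illustrative):

```python
def shunt_current(v_shunt_trace, r_shunt_ohms):
    """Convert a shunt-voltage waveform to a current waveform via I = V / R."""
    return [v / r_shunt_ohms for v in v_shunt_trace]

# 0.1 ohm precision shunt in series with the DUT; scope records its voltage
volts = [0.0, 0.05, 0.10, 0.10, 0.05, 0.0]
amps = shunt_current(volts, 0.1)  # a 0.10 V peak corresponds to 1 A
```

Note that the voltage dropped across the shunt subtracts from the voltage actually applied to the DUT, so it must be accounted for when setting the source level.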

FIGURE 3. Pulse measurement using a pulse generator and an oscilloscope. Voltage is measured across the device with a voltage probe and current through the device is measured with a current probe.

Although a simple pulse generator/oscilloscope combination is good for fast transient pulse measurements, it’s not appropriate for all pulse measurement applications. A scope’s measurement resolution is relatively low (8–12 bits). Because scopes are designed to capture waveforms, they’re not well suited for making pulse I-V measurements. Although the built-in pulse measure functions can help with measuring the level of a pulse, this represents only a single point on the I-V curve. Generating a complete curve with this setup would be time consuming, requiring either manual data collection or a lot of programming. Pulse generators are typically limited to outputting 10-20V max with a current delivery capability of only a couple hundred milliamps, which would limit this setup to lower power devices and/or lower power tests. Test setup can also be complex. Getting the desired voltage at the device requires impedance matching with the pulse generator. If a shunt resistor is used to measure current, then the voltage drop across this resistor must be taken into account as well.

Curve tracers were all-in-one instruments designed specifically for I-V characterization of 2- and 3-terminal power semiconductor devices. They featured high current and high voltage supplies for stimulating the device, a configurable voltage/current source for stimulating the device’s control terminal, a built-in test fixture for making connections, a scope-like display for real-time feedback, and a knob for controlling the magnitude of the output. However, source measure unit (SMU) instruments (FIGURE 4) have now largely taken over the functions they once performed.

FIGURE 4. Model 2620B System SourceMeter SMU instrument.

SMU instruments combine the source capabilities of a precision power supply with the measurement capabilities of a high accuracy DMM. Although originally designed for making extremely accurate DC measurements, SMU instruments have been enhanced to include pulse measurement capabilities as well. These instruments can source much higher currents in pulse mode than in DC mode. For example, the Keithley Model 2602B SourceMeter SMU instrument can output up to 3A DC and up to 10A pulsed. For applications that require even higher currents, the Model 2651A SourceMeter SMU instrument can output up to 20A DC or 50A pulsed. If two Model 2651As are configured in parallel, pulse current outputs up to 100A are possible.

SMU instruments can source both voltage and current with high accuracy thanks to an active feedback loop that monitors the output and adjusts it as necessary to achieve the programmed output value. They can even sense voltage remotely, directly at the DUT, using a second set of test leads, ensuring the correct voltage at the device. These instruments measure with high precision as well, with dual 28-bit delta-sigma or integrating-type ADCs. Using these ADCs along with their flexible sourcing engines, SMUs can perform very accurate pulse I-V measurement sweeps to characterize devices. Some, including the Model 2651A, also include two SAR-type ADCs that can sample at 1 mega-sample per second with 18-bit resolution, making them excellent for transient pulse measurements as well.

In addition, some SMU instruments offer excellent low current capability, with ranges as low as 100pA with 100aA resolution. Their wide dynamic range makes SMU instruments an excellent choice for both ON- and OFF-state device characterization. Also, because they combine sourcing and measurement in a single instrument, SMU instruments reduce the number of instruments involved, which not only simplifies triggering and programming but reduces the overall cost of test.

Although SMU instruments are often used for pulse measurements, they don’t operate in the same way as a typical pulse generator. For example, an SMU instrument’s rise and fall times cannot be controlled by the user; they depend on the gain and bandwidth of the instrument’s feedback loop. Because these loops are designed to generate little or no overshoot when stepping the source, the minimum width of the pulses they produce is not as short as that possible from a pulse generator. However, an SMU instrument can produce pulse widths as short as 50–100μs, which minimizes device self-heating.

The terminology used to describe a pulse when using SMU instruments differs slightly from that used with pulse generators. Rather than referring to the output levels in the pulse as amplitude and base, or the high level and the low level, with SMU instruments the high level is referred to as the pulse level and the low level as the bias level. The term bias level originates from the SMU’s roots in DC testing, where one terminal of a device might be biased with a fixed level. Pulse width is still used with SMU instruments, but its definition is slightly different. Given that rise and fall times cannot be set directly and vary with the range in use and the load connected to the output, pulse width can’t be accurately defined by Full Width at Half Maximum (FWHM); refer to the sidebar for more information on FWHM. Instead, for most SMU instruments, pulse width is defined as the time from the start of the rising edge to the start of the falling edge, points chosen because they are under the user’s control.

In other words, the user can set the pulse width by setting the time between when the source is told to go to the pulse level and then told to go back to the bias level.
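Stated as a formula, the SMU definition of pulse width is simply the interval between the two source commands. A trivial sketch (the timings are hypothetical, and this is not any instrument’s actual programming model):

```python
def smu_pulse_width(t_pulse_cmd_s, t_bias_cmd_s):
    """SMU-style pulse width: start of rising edge to start of falling edge."""
    return t_bias_cmd_s - t_pulse_cmd_s

# Source commanded to the pulse level at t = 1.0 ms
# and back to the bias level at t = 1.5 ms
width = smu_pulse_width(1.0e-3, 1.5e-3)  # 500 us by the SMU definition
```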

FIGURE 5. A pulse measure unit card combines the capabilities of a pulse generator and a high resolution oscilloscope.

Pulse measure units (PMUs) combine the capabilities of a pulse generator and a high-resolution oscilloscope, and are sometimes implemented as card-based solutions designed to plug into a test mainframe. Keithley’s Model 4225-PMU, designed for use with the Model 4200 Semiconductor Characterization System (FIGURE 5), is one example. It has two independent channels capable of sourcing up to 40V at up to 800mA. As with a standard pulse generator, users can define all parameters of the pulse shape. Pulse widths as narrow as 60ns and rise and fall times as short as 20ns make it well suited for characterizing devices with fast transients. A Segment Arb mode allows outputting multi-level pulse waveforms in separately defined segments, with separate voltage levels and durations for each. Each PMU channel is capable of measuring both current and voltage using two 14-bit 200MS/s ADCs per channel, for a total of four ADCs per card. Additionally, all four ADCs are capable of sampling together synchronously at full speed. By combining a pulse generator with scope-like measurement capability in one instrument, a PMU can not only make high-resolution transient pulse measurements but also perform pulse I-V measurement sweeps easily, using a spot mean method for enhanced resolution.

MEMS Industry Group (MIG) is bringing its popular MEMS & Sensors Technology Showcase to MEMS Executive Congress Europe for the first time. Selected from a pool of applicants, finalist companies will demo their MEMS/sensors-based applications as they vie for attendees’ votes.

“MEMS & Sensors Technology Showcase is unique in the MEMS/sensors industry, and that’s why it’s always been a crowd pleaser at the US version of this event,” said Karen Lightman, executive director, MEMS Industry Group. “Finalists include a wide array of products that demonstrate the enabling power of MEMS and sensors: portable odor detectors, touch-free vital-signs systems, and gas/alcohol-detection monitors that work with mobile phones, as well as non-invasive oxygen readers and motion-based energy harvesters. I am excited to see which company our audience crowns as winner.”

This year’s finalists include:

1

NeOse by Aryballe Technologies — Based on the combination of nano, biotech, IT and cognitive sciences, Aryballe develops innovative technologies, databases, software and devices applied to the identification, measurement and representation of smells and tastes.

The company’s main product, NeOse, will be launched in 2016 and should be the first universal portable odor detector (e-nose) on the market. As a personal device that connects to smartphones and databases, NeOse is able to recognize a wide spectrum of different odors.

2

The Touch-Free Life Care (TLC) System by BAM Labs – The TLC System frees patients from the encumbrance of wired medical monitoring devices. This touch-free digital health solution uses Freescale’s MPXV2010 pressure sensor, MCU and applications processor to provide comprehensive hardware support for data collection, networking and communications for non-intrusive health monitoring.

The TLC System tracks bio-signals without keeping patients tethered to bedside monitors.

3

MOX Gas Sensors by Cambridge CMOS Sensors – Cambridge CMOS Sensors Metal Oxide (MOX) gas sensors use MEMS Micro-hotplate technology to provide a unique silicon platform for gas sensing, enabling sensor miniaturization, low power consumption and ultra-fast response times.

For the MEMS & Sensors Technology Showcase, Cambridge CMOS Sensors will present a MOX sensor module connected to a mobile device, demonstrating superior gas detection for indoor air quality (IAQ) and Volatile Organic Compounds (VOCs). The company will also show how its MOX sensor module supports alcohol detection via breath analysis.

4

The Demox Reader by CSEM (Swiss Centre for Electronics and Microtechnology) —

Originally developed to monitor oxygen in real time in cell and tissue cultures, the Demox reader is a versatile device that enables oxygen measurements for many different applications. CSEM will demo a new Demox reader that can be used to assess air and water quality as well as to support process control of food and beverages. This compact device can be mounted on commercial microscopes that are regularly used to investigate living biological materials.

The Demox optical reader allows the rapid, efficient and non-invasive measurements of oxygen concentration in a wide range of materials.

5

EnerBee Rv2 by EnerBee — EnerBee technology is based on research developed to produce MEMS electric generators that create electricity from all kinds of movement. This includes motion at very low speeds where traditional generators become unusable.

Enerbee Rv2 is a highly efficient motion-based energy harvester that produces electricity independently from motion speed, and powers low-power devices in Internet of Things applications such as building automation, access control and smart objects.

MEMS & Sensors Technology Showcase takes place 9 March, 2015, 16:00-17:30 at Crowne Plaza Copenhagen Towers, Copenhagen, Denmark.

MIG Executive Director Karen Lightman will announce the winner during her closing remarks, 10 March, 2015 at 17:15.

By CHOWDARY YANAMADALA, Senior Vice President of Business Development, ChaoLogix, Gainesville, FL 

Data is ubiquitous today. It is generated, exchanged and consumed at unprecedented rates.

According to Gartner, Internet of Things connected devices (excluding PCs, tablets and smart phones) will grow to 26 billion devices worldwide by 2020—a 30-fold increase from 2009. Sales of these devices will add $1.9 trillion in economic value globally.

Indeed, one of the major benefits of the Internet of Things movement is the connectivity and accessibility of data; however, this also raises concerns about securely managing that data.

Managing data security in hardware

Data security involves essential steps of authentication and encryption. We need to authenticate data generation and data collection sources, and we need to preserve the privacy of the data.

The Internet of Things comprises a variety of components: hardware, embedded software and services associated with the “things.” Data security is needed at each level.

Hardware security is generally implemented in the chips that make up the “things.” The mathematical security of authentication and encryption algorithms is less of a concern: the algorithms themselves are not new, and the industry has refined them over many years.

Nonetheless, hackers can exploit implementation flaws in these chips. Side channel attacks (SCAs) are a major threat to data security within integrated circuits (ICs) that hold sensitive data, such as identifying information and the secret keys needed for authentication or encryption algorithms. Specific SCAs include differential power analysis (DPA) and differential electromagnetic analysis (DEMA).

There are many published and unpublished attacks on the security of chips deployed in the market, and SCA threats are rapidly evolving, increasing in potency and becoming easier to mount.

These emerging threats render defensive techniques adopted by the IC manufacturers less potent over time, igniting a race between defensive and offensive (threat) techniques. For example, chips that deploy defensive techniques deemed sufficient in 2012 may be less effective in 2014 due to emerging threats. Once these devices are deployed, they become vulnerable to new threats.

Another challenge IC manufacturers face is the complexity of defensive techniques. Often, defensive techniques that are algorithm- or protocol-specific are layered to address multiple targeted threats.

This “Band-Aid” approach is tedious and becomes unwieldy to manage. The industry must remember that leaving hardware vulnerable to SCA threats can significantly weaken data security. This vulnerability may manifest itself in the form of revenue loss (counterfeits of consumables), loss of privacy (compromised identification information), breach of authentication (rogue devices in the closed network) and more.

How to increase the permanence of security

A simplified way to look at the SCA problem is as a signal-to-noise issue. In this case, the signal is the sensitive data leaked through the power signature. The noise is the ambient or manufactured noise added to the system to prevent the signal from being extracted from the power signature.

Many defensive measures today concentrate on increasing noise in the system to obfuscate the signal. The challenge with this approach is that emerging statistical techniques are becoming adept at separating the signal from the noise, thereby decreasing the potency of the deployed defensive techniques.
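To see why added noise alone is a weak defense, consider that averaging N traces of the same operation shrinks random noise by roughly the square root of N while leaving any data-dependent component intact. The sketch below is a purely illustrative simulation (the leakage values are invented; this is not an actual attack):

```python
import random

def mean_trace(traces):
    """Point-wise average of repeated power traces of the same operation."""
    n = len(traces)
    return [sum(t[i] for t in traces) / n for i in range(len(traces[0]))]

random.seed(0)
leak = [0.0, 0.5, 0.0]  # hypothetical data-dependent leakage at sample 1
traces = [[s + random.gauss(0.0, 1.0) for s in leak] for _ in range(10000)]

avg = mean_trace(traces)
# After averaging, the leaky sample stands out even though each raw trace
# is dominated by noise (sigma = 1.0 vs. a 0.5 signal)
```

This is why defenses that only add noise lose potency as attackers collect more traces, whereas removing the data dependence itself is more durable.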

One way to deal with this problem effectively is to “weave security into the fabric of design,” addressing SCA threats at the source rather than treating the symptoms. What if we could make the power signature agnostic of the data processed? What if we could build security into the building blocks of design? That would make the security more permanent and simplify its implementation.

A simplified approach to weaving security into the fabric of design involves leveraging a secure standard cell library that is hardened against SCA. Such a library would use analog design techniques to tackle the problem of SCA at the source, diminishing the SCA signal and making it difficult to extract from the power signature.

Leveraging standard cells should be simple since they are the basic building blocks of digital design. As an industry, we cannot afford to bypass these critical steps to defend our data.

Leading industry experts provide their perspectives on what to expect in 2015. 3D devices and 3D integration, rising process complexity and “big data” are among the hot topics.

Entering the 3D era

Steve Ghanayem, vice president, general manager, Transistor and Interconnect Group, Applied Materials

This year, the semiconductor industry celebrates the 50th anniversary of Moore’s Law. We are at the onset of the 3D era. We expect to see broad adoption of 3D FinFETs in logic and foundry. Investments in 3D NAND manufacturing are expanding as this technology takes hold. This historic 3D transformation impacting both logic and memory devices underscores the aggressive pace of technology innovation in the age of mobility. The benefits of going 3D — lower power consumption, increased processing performance, denser storage capacity and smaller form factors — are essential for the industry to enable new mobility, connectivity and Internet of Things applications.

The semiconductor equipment industry plays a major role in enabling this 3D transformation through new materials, capabilities and processes. Fabricating leading-edge 3D FinFET and NAND devices adds complexity in chip manufacturing that has soared with each node transition. The 3D structure poses unique challenges for deposition, etch, planarization, materials modification and selective processes to create a yielding device, requiring significant innovations in critical dimension control, structural integrity and interface preparation. As chips get smaller and more complex, variations accumulate while process tolerances shrink, eroding performance and yields. Chipmakers need cost-effective solutions to rapidly ramp device yield to maintain the cadence of Moore’s Law. Given these challenges, 2015 will be the year when precision materials engineering technologies are put to the test to demonstrate high-volume manufacturing capabilities for 3D devices.

Achieving excellent device performance and yield for 3D devices demands equipment engineering expertise leveraging decades of knowledge to deliver the optimal system architecture with wide process window. Process technology innovation and new materials with atomic-scale precision are vital for transistor, interconnect and patterning applications. For instance, transistor fabrication requires precise control of fin width, limiting variation from etching to lithography. Contact formation requires precision metal film deposition and atomic-level interface control, critical to lowering contact resistance. In interconnect, new materials such as cobalt are needed to improve gap fill and reliability of narrow lines as density increases with each technology node. Looking forward, these precision materials engineering technologies will be the foundation for continued materials-enabled scaling for many years to come.

Increasing process complexity and opportunities for innovation

Brian Trafas, Chief Marketing Officer, KLA-Tencor Corporation

The 2014 calendar year started with promise and optimism for the semiconductor industry, and it concluded with similar sentiments. While concerns about financial risk and industry consolidation at times overshadow the industry, there is much to be positive about as we arrive in the new year. From increases in equipment spending and revenue in the materials market to record-level silicon wafer shipment projections, 2015 forecasts all point in the right direction. Industry players are also doing their part to address new challenges, creating strategies to overcome complexities associated with innovative techniques such as multipatterning and 3D architectures.

The semiconductor industry continues to explore new technologies, including 3DIC, TSV, and FinFETs, which carry challenges that also represent opportunities. First, for memory as well as foundry logic, the need for multipatterning to extend lithography is a key focus. We’re seeing some of the value of a traditional lithography tool shifting into non-litho processing steps. As such, customers need to monitor litho and non-litho sources of error and critical defects to be able to yield successfully at next-generation nodes. To enable successful yields with decreasing patterning process windows, it is essential to address all sources of error and to provide feed-forward and feed-backward corrections.

The transition from 2D to 3D in memory and logic is another focus area.  3D leads to tighter process margins because of the added steps and complexity.  Addressing specific yield issues associated with 3D is a great opportunity for companies that can provide value in addressing the challenges customers are facing with these unique architectures.

The wearable, intelligent mobile and IoT markets are continuing to grow rapidly and bring new opportunities. We expect the IoT to drive higher levels of semiconductor content and contribute to future growth in the industry. Demand for these types of devices will add value across the entire chain, including not only semiconductor devices but also software and services. The semiconductor content in these devices can provide growth opportunities for microcontrollers and embedded processors as well as sensing devices.

Critical to our industry’s success is tight collaboration among peers and with customers. With such complexity to the market and IC technology, it is very important to work together to understand challenges and identify where there are opportunities to provide value to customers, ultimately helping them to make the right investments and meet their ramps.

Controlling manufacturing variability key to success at 10nm

Richard Gottscho, Ph.D., Executive Vice President, Global Products, Lam Research Corporation

This year, the semiconductor industry should see the emergence of chip-making at the 10nm technology node. When building devices with geometries this small, controlling manufacturing process variability is essential and most challenging since variation tolerance scales with device dimensions.

Controlling variability has always been important for improving yield and device performance. With every advance in technology and change in design rule, tighter process controls are needed to achieve these benefits. At the 22/20nm technology node, for instance, variation tolerance for CDs (critical dimensions) can be as small as one nanometer, or about 14 atomic layers; for the 10nm node, it can be less than 0.5nm, or just 3 – 4 atomic layers. Innovations that drive continuous scaling to sub-20nm nodes, such as 3D FinFET devices and double/quadruple patterning schemes, add to the challenge of reducing variability. For example, multiple patterning processes require more stringent control of each step because additional process steps are needed to create the initial mask:  more steps mean more variability overall. Multiple patterning puts greater constraints not only on lithography, but also on deposition and etching.

Three types of process variation must be addressed:  within each die or integrated circuit at an atomic level, from die to die (across the wafer), and from wafer to wafer (within a lot, lot to lot, chamber to chamber, and fab to fab). At the device level, controlling CD variation to within a few atoms will increasingly require the application of technologies such as atomic layer deposition (ALD) and atomic layer etching (ALE). Historically, some of these processes were deemed too slow for commercial production. Fortunately, we now have cost-effective solutions, and they are finding their way into volume manufacturing.

To complement these capabilities, advanced process control (APC) will be incorporated into systems to tune chemical and electrical gradients across the wafer, further reducing die-to-die variation. In addition, chamber matching has never been more important. Big data analytics and subsystem diagnostics are being developed and deployed to ensure that every system in a fab produces wafers with the same process results to atomic precision.

Looking ahead, we expect these new capabilities for advanced variability control to move into production environments sometime this year, enabling 10nm-node device fabrication.

2015: The year 3D-IC integration finally comes of age

Paul Lindner, Executive Technology Director, EV Group

2015 will mark an important turning point in the course of 3D-IC technology adoption, as the semiconductor industry moves 3D-IC fully out of the development and prototyping stages and onto the production floor. In several applications, this transition is already taking place. To date, at least a dozen components in a typical smartphone employ 3D-IC manufacturing technologies. While the application processor and memory in these smart devices continue to be stacked at the package level (package-on-package, or PoP), many other device components—including image sensors, MEMS, RF front-end and filter devices—are now realizing the promise of 3D-IC: reduced form factor, increased performance and, most importantly, reduced manufacturing cost.

The increasing adoption of wearable mobile consumer products will also accelerate the need for higher-density integration and reduced form factor, particularly with respect to MEMS devices. More functionality will be integrated both within the same device and within one package via 3D stacking. Nine-axis inertial measurement units (IMUs, which comprise three accelerometer, three gyroscope and three magnetometer axes) will see reductions in size, cost and power consumption, along with easier integration.

On the other side of the data stream, at data centers, expect to see new developments around 3D-IC technology coming to market in 2015 as well. Compound semiconductors integrated with photonics and CMOS will trigger the replacement of copper wiring with optical fibers to drive down power consumption and electricity costs, thanks to 3D stacking technologies. The recent introduction of stacked DRAM with high-performance microprocessors, such as Intel’s Knights Landing processor, already demonstrates how 3D-IC technology is finally delivering on its promises across many different applications.

Across these various applications that are integrating stacked 3D-IC architectures, wafer bonding will play a key role. This is true for 3D-ICs integrating through silicon vias (TSVs), where temporary bonding in the manufacturing flow or permanent bonding at the wafer-level is essential. It’s the case for reducing power consumption in wearable products integrating MEMS devices, where encapsulating higher vacuum levels will enable low-power operation of gyroscopes. Finally, wafer-level hybrid fusion bonding—a technology that permanently connects wafers both mechanically and electrically in a single process step and supports the development of thinner devices by eliminating adhesive thickness and the need for bumps and pillars—is one of the promising new processes that we expect to see utilized in device manufacturing starting in 2015.

2015: Curvilinear Shapes Are Coming

Aki Fujimura, CEO, D2S

For the semiconductor industry, 2015 will be the start of one of the most interesting periods in the history of Moore’s Law. For the first time in two decades, the fundamental machine architecture of the mask writer is going to change over the next few years—from Variable Shaped Beam (VSB) to multi-beam. Multi-beam mask writing is likely the final frontier—the technology that will take us to the end of the Moore’s Law era. The write times associated with multi-beam writers are constant regardless of the complexity of the mask patterns, and this changes everything. It will open up a new world of opportunities for complex mask making that make trade-offs between design rules, mask/wafer yields and mask write-times a thing of the past. The upstream effects of this may yet be underappreciated.

While high-volume production of multi-beam mask writing machines may not arrive in time for the 10nm node, the industry expresses little doubt that they will arrive by the 7nm node. Since transitions of this magnitude take several years to permeate the ecosystem, 2015 is the right time to start preparing for the impact of this change. Multi-beam mask writing enables the creation of very complex mask shapes (even ideal curvilinear shapes). When used in conjunction with optical proximity correction (OPC), inverse lithography technology (ILT) and pixelated masks, this enables more precise wafer writing with improved process margin. Improving process margin on both the mask and wafer will allow design rules to be tighter, which will re-activate the transistor-density benefit of Moore’s Law.

The prospect of multi-beam mask writing makes it clear that OPC needs to yield better wafer quality by taking advantage of complex mask shapes. This clear direction for the future, combined with the need for more process margin and overlay accuracy, makes complex mask shapes a requirement at the 10nm node. Technologies such as model-based mask data preparation (MB-MDP) will take center stage in 2015 as a bridge to 10nm using VSB mask writing.

Whether for VSB mask writing or for multi-beam mask writing, the shapes we need to write on masks are increasingly complex, increasingly curvilinear, and smaller in minimum width and space. The overwhelming trend in mask data preparation is the shift from deterministic, rule-based, geometric, context-independent, shape-modulated, rectangular processing to statistical, simulation-based, context-dependent, dose- and shape-modulated, any-shape processing. We will all be witnesses to the start of this fundamental change as 2015 unfolds. It will be a very exciting time indeed.

Data integration and advanced packaging driving growth in 2015

Mike Plisinski, Chief Operating Officer, Rudolph Technologies, Inc.

We see two important trends that we expect to have a major impact in 2015. The first is continuing investment in developing and implementing 3D integration and advanced packaging processes, driven not only by the demand for more power and functionality in smaller volumes, but also by the dramatic escalation in the number and density of I/O lines per die. This includes not only through-silicon vias, but also copper pillar bumps, fan-out packaging, and hyper-efficient panel-based packaging processes that use dedicated lithography systems on rectangular substrates. As the back end adopts and adapts processes from the front end, the lines that have traditionally separated these areas are blurring. Advanced packaging processes require significantly more inspection and control than conventional packaging, and this trend is still only in its early stages.

The other trend has a broader impact on the market as a whole. As consumer electronics becomes a more predominant driver of our industry, manufacturers are under increasing pressure to ramp new products faster and at higher volumes than ever before. Winning or losing an order from a mega cell phone manufacturer can make or break a year, and those orders are being won based on technology and quality, not only on price as in the past. This is forcing manufacturers to look for more comprehensive solutions to their process challenges. Instead of buying a tool that meets certain criteria of their established infrastructure and then relying on IT to connect it, interpret the data, and write the charts and reports the process engineers need, manufacturers are now pushing much of this onto their vendors, saying, “We want you to provide a working tool that’s going to meet these specs right away and provide us the information we need to adjust and control our process going forward.” They want information, not just data.

Rudolph has made, and will continue to make, major investments in the development of automated analytics for process data. Now more than ever, when our customer buys a system from us, whatever its application – lithography, metrology, inspection or something new, they also want to correlate the data it generates with data from other tools across the process in order to provide more information about process adjustments. We expect these same customer demands to drive a new wave of collaboration among vendors, and we welcome the opportunity to work together to provide more comprehensive solutions for the benefit of our mutual customers.

Process Data – From Famine to Feast

Jack Hager, Product Marketing Manager, FEI

As shrinking device sizes have forced manufacturers to move from SEM to TEM for analysis and measurement of critical features, process and integration engineers have often found themselves making critical decisions on meagre rations of process data. Recent advances in automated TEM sample preparation, using FIBs to prepare high-quality, ultra-thin, site-specific samples, have opened the tap on the flow of data. Engineers can now make statistically sound decisions in an environment of abundant data. The availability of fast, high-quality TEM data has whetted their appetite for even more, and the resulting demand is drawing sample preparation systems, and in some cases TEMs, out of remote laboratories and onto the fab floor or into “near-line” locations. With the high degree of automation of both the sample preparation and the TEM, the process engineers who ultimately consume the data can now own and operate the systems that generate it, giving them control over the amount of data created.

The proliferation of exotic materials and new 3D architectures at the most advanced nodes has dramatically increased the need for fast, accurate process data. The days when performance improvements required no more than a relatively simple “shrink” of basically 2D designs using well-understood processes are long gone. Complex new processes require additional monitoring to aid in process control and failure-analysis troubleshooting. Defects, both electrical and physical, are not only more numerous, but typically smaller and more varied. These defects are often buried below the exposed surface, which limits the effectiveness of traditional inline defect-monitoring equipment and has created renewed challenges in diagnosing root causes. TEM analysis now plays a more prevalent role, providing defect insights that enable actionable process changes.

While process technologies have changed radically, market fundamentals have not. First to market still commands premium prices and builds market share. And time to market is determined largely by the speed with which new manufacturing processes can be developed and ramped to high yields at high volumes. It is in these critical phases of development and ramp that the speed and accuracy of automated sample preparation and TEM analysis is proving most valuable. The methodology has already been adopted by leading manufacturers across the industry – logic and memory, IDM and foundry. We expect the adoption to continue, and with it, the migration of sample preparation and advanced measurement and analytical systems into the fab. 

Diversification of processes, materials will drive integration and customization in sub-fab

Kate Wilson, Global Applications Director, Edwards

We expect the proliferation of new processes, materials and architectures at the most advanced nodes to drive significant changes in the sub-fab where we live. In particular, we expect to see a continuing move toward the integration of vacuum pumping and abatement functions, with custom tuning to optimize performance for an increasingly diverse array of applications becoming a requirement. There is also a growing requirement for additional features around the core units, such as thermal management, heated N2 injection, and pre- and post-pump precursor treatment, that must be managed.

Integration offers clear advantages, not only in cost savings but also in safety, speed of installation, smaller footprint, consistent implementation of correct components, optimized set-ups and controlled ownership of the process effluents until they are abated reliably to safe levels. The benefits are not always immediately apparent: effective integration is much more than simply adding a pump to an abatement system, and the initial cost of an integrated system is more than the cost of the individual components. The cost benefits of a properly integrated system accrue primarily from increased efficiency and reliability over the life of the system, and the magnitude of the benefit depends on the complexity of the process. In harsh applications, including deposition processes such as CVD, Epi and ALD, integrated systems provide significant improvements in uptime, service intervals and product lifetimes, as well as significant safety benefits.

The trend toward increasing process customization impacts the move toward integration through its requirement that the integrator have detailed knowledge of the process and its by-products. Each manufacturer may use a slightly different recipe and a small change in materials or concentrations can have a large effect on pumping and abatement performance. This variability must be addressed not only in the design of the integrated system but also in tuning its operation during initial commissioning and throughout its lifetime to achieve optimal performance. Successful realization of the benefits of integration will rely heavily on continuing support based on broad application knowledge and experience.

Giga-scale challenges will dominate 2015

Dr. Zhihong Liu, Executive Chairman, ProPlus Design Solutions, Inc.

It wasn’t all that long ago when nano-scale was the term the semiconductor industry used to describe small transistor sizes to indicate technological advancement. Today, with Moore’s Law slowing down at sub-28nm, the term more often heard is giga-scale due to a leap forward in complexity challenges caused in large measure by the massive amounts of big data now part of all chip design.

Nano-scale technological advancement has enabled giga-sized applications for more varieties of technology platforms, including the most popular mobile, IoT and wearable devices. EDA tools must respond to this trend. On one side, accurately modeling nano-scale devices, including the complex physical effects of small geometries and complicated device structures, has grown in both importance and difficulty. Designers now demand more from foundries and hold PDKs and models to higher standards of accuracy. They need a deep understanding of the process platform to make their chip or IP competitive.

On the other side, giga-scale designs require accurate tools that can handle ever-increasing design sizes. The small supply voltages associated with advanced technology and low-power applications, together with the impact of various process-variation effects, have reduced available design margins. Furthermore, sheer circuit size makes designs sensitive to small leakage currents and narrow noise margins. Accuracy will soon become the bottleneck for giga-scale designs.

However, traditional design tools for big designs, such as FastSPICE for simulation and verification, mostly trade off accuracy for capacity and performance. One particular example is the need for accurate memory design, e.g., large-instance memory characterization or full-chip timing and power verification. Because embedded memory may occupy more than 50 percent of a chip’s die area, it has a significant impact on chip performance and power. For advanced designs, power and timing characterization and verification require much higher accuracy than FastSPICE can offer: errors of 5 percent or less compared to golden SPICE.

To meet the giga-scale challenges outlined above, the next-generation circuit simulator must offer the high accuracy of a traditional SPICE simulator along with capacity and performance advantages similar to a FastSPICE simulator’s. New entrants into the giga-scale SPICE simulation market readily handle the latest process technologies, such as 16/14nm FinFET, which add further challenges to capacity and accuracy.

A giga-scale SPICE simulator can cover small- and large-block simulation, characterization, and full-chip verification with a pure SPICE engine that guarantees accuracy and eliminates inconsistencies in the traditional design flow. It can be used as the golden reference for FastSPICE applications, or directly replace FastSPICE for memory designs.

The giga-scale era in chip design is here and giga-scale SPICE simulators are commercially available to meet the need.

Nanoengineers at the University of California, San Diego have tested a temporary tattoo that both extracts and measures the level of glucose in the fluid in between skin cells. This first-ever example of the flexible, easy-to-wear device could be a promising step forward in noninvasive glucose testing for patients with diabetes.

The sensor was developed and tested by graduate student Amay Bandodkar and colleagues in Professor Joseph Wang’s laboratory at the NanoEngineering Department and the Center for Wearable Sensors at the Jacobs School of Engineering at UC San Diego. Bandodkar said this “proof-of-concept” tattoo could pave the way for the Center to explore other uses of the device, such as detecting other important metabolites in the body or delivering medicines through the skin.

Nanoengineers at the University of California, San Diego have tested a temporary tattoo that both extracts and measures the level of glucose in the fluid in between skin cells. CREDIT: Jacobs School of Engineering/UC San Diego

At the moment, the tattoo doesn’t provide the kind of numerical readout that a patient would need to monitor his or her own glucose. But this type of readout is being developed by electrical and computer engineering researchers in the Center for Wearable Sensors. “The readout instrument will also eventually have Bluetooth capabilities to send this information directly to the patient’s doctor in real-time or store data in the cloud,” said Bandodkar.

The research team is also working on ways to make the tattoo last longer while keeping its overall cost down, he noted. “Presently the tattoo sensor can easily survive for a day. These are extremely inexpensive–a few cents–and hence can be replaced without much financial burden on the patient.”

The Center “envisions using these glucose tattoo sensors to continuously monitor glucose levels of large populations as a function of their dietary habits,” Bandodkar said. Data from this wider population could help researchers learn more about the causes and potential prevention of diabetes, which affects hundreds of millions of people and is one of the leading causes of death and disability worldwide.

People with diabetes often must test their glucose levels multiple times per day, using devices that use a tiny needle to extract a small blood sample from a fingertip. Patients who avoid this testing because they find it unpleasant or difficult to perform are at a higher risk for poor health, so researchers have been searching for less invasive ways to monitor glucose.

In their report in the journal Analytical Chemistry, Wang and his co-workers describe their flexible device, which consists of carefully patterned electrodes printed on temporary tattoo paper. A very mild electrical current applied to the skin for 10 minutes forces sodium ions in the fluid between skin cells to migrate toward the tattoo’s electrodes. These ions carry glucose molecules that are also found in the fluid. A sensor built into the tattoo then measures the strength of the electrical charge produced by the glucose to determine a person’s overall glucose levels.

“The concentration of glucose extracted by the non-invasive tattoo device is almost a hundred times lower than the corresponding level in human blood,” Bandodkar explained. “Thus we had to develop a highly sensitive glucose sensor that could detect such low levels of glucose with high selectivity.”

A similar device called GlucoWatch from Cygnus Inc. was marketed in 2002, but the device was discontinued because it caused skin irritation, the UC San Diego researchers note. Their proof-of-concept tattoo sensor avoids this irritation by using a lower electrical current to extract the glucose.

Wang and colleagues applied the tattoo to seven men and women between the ages of 20 and 40 with no history of diabetes. None of the volunteers reported feeling discomfort during the tattoo test, and only a few people reported feeling a mild tingling in the first 10 seconds of the test.

To test how well the tattoo picked up the spike in glucose levels after a meal, the volunteers ate a carb-rich meal of a sandwich and soda in the lab. The device performed just as well at detecting this glucose spike as a traditional finger-stick monitor.

The researchers say the device could be used to measure other important chemicals such as lactate, a metabolite analyzed in athletes to monitor their fitness. The tattoo might also someday be used to test how well a medication is working by monitoring certain protein products in the intercellular fluid, or to detect alcohol or illegal drug consumption.

SUNY Polytechnic Institute (SUNY Poly) yesterday announced the SUNY Board of Trustees has appointed Dr. Alain Kaloyeros as the founding President of SUNY Poly.

“Dr. Alain Kaloyeros has led SUNY’s College of Nanoscale Science and Engineering since its inception, helping to make this first-of-its-kind institution a global model and position New York State as a leader in the nanotechnology-driven economy of the 21st century,” said SUNY Board Chairman H. Carl McCall. “It is only fitting that Dr. Kaloyeros be the one to build that model and bring it to scale through the continued development and expansion of SUNY Polytechnic Institute.”

“As the visionary who built CNSE into a world-class, high-tech, and globally recognized academic and economic development juggernaut, Dr. Alain Kaloyeros is the clear choice to lead SUNY Polytechnic Institute into the future,” said SUNY Chancellor Nancy L. Zimpher. “The unprecedented statewide expansion of the campus’ unique model and continued strong partnership with Governor Andrew Cuomo is testament to SUNY’s promise as New York’s economic engine and stature as an affordable, world-class educational institution. I am confident that, as its president, Dr. Kaloyeros will continue to build on SUNY Poly’s success and contributions to New York.”

“SUNY Polytechnic Institute is a revolutionary discovery and education model with two coequal campuses in Utica and Albany, and a key component of Governor Cuomo’s vision for high-tech innovation, job creation, and economic development in New York State.  I am privileged and humbled to be selected for the honor of leading this world-class institution and its talented and dedicated faculty, staff, and students,” said Dr. Kaloyeros.  “I would like to extend my sincere gratitude to the Governor, Chairman Carl McCall, the SUNY Board of Trustees, and Chancellor Nancy Zimpher for their continued confidence and support.”

Dr. Kaloyeros received his Ph.D. in Experimental Condensed Matter Physics from the University of Illinois at Urbana-Champaign in 1987.  A year later, Governor Mario M. Cuomo recruited Dr. Kaloyeros under the SUNY Graduate Research Initiative.  Since then, Dr. Kaloyeros has been actively involved in the development and implementation of New York’s high-tech strategy to become a global leader in the nanotechnology-driven economy of the 21st Century.

A critical cornerstone of New York’s high-technology strategy has been the establishment of the Colleges of Nanoscale Science and Engineering (CNSE) at SUNY Poly as a truly global resource that enables pioneering research and development, technology deployment, education, and commercialization for the international nanoelectronics industry.  CNSE was originally founded in April 2004 in response to the rapid changes and evolving needs in the educational and research landscapes brought on by the emergence of nanotechnology.  Under Dr. Kaloyeros’ leadership, CNSE has generated over $20B in public and private investments.

In 2014, CNSE merged with the SUNY Institute of Technology to form SUNY Poly, which today represents the world’s most advanced university-driven research enterprise, offering students a one-of-a-kind academic experience and providing over 300 corporate partners with access to an unmatched ecosystem for leading-edge R&D and commercialization of nanoelectronics and nanotechnology innovations.

 

The explosive expansion of the Internet of things (IoT) is driving rapid demand growth for microelectromechanical systems (MEMS) devices in areas including asset-tracking systems, smart grids and building automation.

Worldwide market revenue for MEMS directly used in industrial IoT equipment will rise to $120 million in 2018, up from $16 million in 2013, according to IHS Technology (NYSE: IHS). Additional MEMS also will be used to support the deployment of the IoT, such as devices employed in data centers. This indirect market for industrial IoT MEMS will increase to $214 million in 2018, up from $43 million in 2013.
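The growth implied by these IHS figures can be sanity-checked with a quick compound-annual-growth-rate (CAGR) calculation. The sketch below uses only the revenue numbers cited above; the `cagr` helper is ours, for illustration, not part of the IHS report.

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values, as a fraction."""
    return (end / start) ** (1.0 / years) - 1.0

# IHS figures cited above: revenue in millions of US$, 2013 -> 2018
direct = cagr(16, 120, 5)    # MEMS used directly in industrial IoT equipment
indirect = cagr(43, 214, 5)  # MEMS supporting IoT deployment (e.g. data centers)

print(f"direct IoT MEMS CAGR:   {direct:.1%}")    # roughly 50% per year
print(f"indirect IoT MEMS CAGR: {indirect:.1%}")  # roughly 38% per year
```

In other words, the direct market is forecast to compound at about 50 percent annually over the five years, and the indirect market at nearly 40 percent.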

The figure below presents the IHS forecast of global MEMS revenue from direct and indirect IoT uses.

Global market shipments for industrial IoT equipment are expected to expand to 7.3 billion units in 2025, up from 1.8 billion in 2013. The industrial IoT market is a diverse area, comprising equipment such as nodes, controllers and infrastructure, and used in markets ranging from building automation to commercial transport, smart cards, industrial automation, lighting and health. Such gear employs a range of MEMS device types including accelerometers, pressure sensors, timing components and microphones.

“The Internet of things is sometimes called the machine-to-machine (M2M) revolution, and one important class of machines—MEMS—will play an essential role in the boom of the industrial IoT segment in the coming years,” said Jeremie Bouchaud, director and senior principal analyst for MEMS and sensors at IHS. “MEMS sensors allow equipment to gather and digitize real-world data that then can be shared on the Internet. The IoT represents a major new growth opportunity for the MEMS market.”

More information on the topic can be found in the report entitled “Internet of Things begins to impact High-Value MEMS” from the MEMS & Sensors service of IHS.

Industrial IoT applications for MEMS

Building automation will generate the largest volumes for MEMS and other types of sensors in the industrial IoT market.

Asset tracking is the second-largest opportunity for sensors in industrial IoT. This segment will drive demand for large volumes of MEMS accelerometers and pressure sensors.

The smart grid also will require various types of MEMS, including inclinometers to monitor high-voltage power lines as well as accelerometers and flow sensors in smart meters.

Other major segments of the industrial IoT market include smart cities, smart factories, seismic monitoring, and drones and robotics.

MEMS types

Accelerometers and pressure sensors account for most of the MEMS shipments for direct industrial IoT applications in areas including building automation, agriculture and medical. MEMS timing devices in smart meters and microphones used in smart homes and smart cities will be next in terms of volume.

Indirect benefits

To support the deluge of data that IoT will generate, major investments will be required in the backbone infrastructure of the Internet, including data centers. This, in turn, will drive the indirect demand for MEMS used in such infrastructure.

Data centers will spur demand for optical MEMS, especially optical cross connects and wavelength selective switches. Big data operations also will require large quantities of integrated circuits (ICs) for memory. The testing of memory ICs makes use of MEMS wafer probe cards.

IoT Market

Worldwide semiconductor market revenue is on track to achieve a 9.4 percent expansion this year, with broad-based growth across multiple chip segments driving the best industry performance since 2010.

Global revenue in 2014 is expected to total $353.2 billion, up from $322.8 billion in 2013, according to a preliminary estimate from IHS Technology (NYSE: IHS). The nearly double-digit-percentage increase follows respectable growth of 6.4 percent in 2013, a decline of more than 2.0 percent in 2012 and a marginal increase of 1.0 percent in 2011. The performance in 2014 represents the highest rate of annual growth since the 33 percent boom of 2010.

“This is the healthiest the semiconductor business has been in many years, not only in light of the overall growth, but also because of the broad-based nature of the market expansion,” said Dale Ford, vice president and chief analyst at IHS Technology. “While the upswing in 2013 was almost entirely driven by growth in a few specific memory segments, the rise in 2014 is built on a widespread increase in demand for a variety of different types of chips. Because of this, nearly all semiconductor suppliers can enjoy good cheer as they enter the 2014 holiday season.”

More information on this topic can be found in the latest release of the Competitive Landscaping Tool from the Semiconductors & Components service at IHS.

Widespread growth

Of the 28 key sub-segments of the semiconductor market tracked by IHS, 22 are expected to expand in 2014. In contrast, only 12 sub-segments of the semiconductor industry grew in 2013.

Last year, the key drivers of the growth of the semiconductor market were dynamic random access memory (DRAM) and data flash memory. These two memory segments together grew by more than 30 percent while the rest of the market only expanded by 1.5 percent.

This year, the combined revenue for DRAM and data flash memory is projected to rise about 20 percent, while the rest of the market will grow by 6.7 percent, supporting the overall market increase of 9.4 percent.
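The overall rate is a revenue-weighted blend of the two segment rates. A back-of-envelope sketch shows the memory-segment weight implied by the quoted figures; note that the share values computed here are derived from the blend, not stated in the text:

```python
# Implied share of DRAM + data flash in prior-year revenue, solved from the
# blended growth rates quoted in the text: overall = w*memory + (1 - w)*rest.
def implied_memory_share(overall, memory, rest):
    return (overall - rest) / (memory - rest)

# 2014: ~20% memory growth, 6.7% for the rest, 9.4% overall
share_2014 = implied_memory_share(9.4, 20.0, 6.7)
# 2013: >30% memory growth, 1.5% for the rest, 6.4% overall
share_2013 = implied_memory_share(6.4, 30.0, 1.5)
print(f"implied memory share: {share_2013:.0%} (2013), {share_2014:.0%} (2014)")
```

Both implied shares land near one-fifth of the total market, which is consistent with the memory segments being large enough to swing the overall growth rate.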

In 2013, only eight semiconductor sub-segments grew by 5 percent or more and only three achieved double-digit growth. In 2014, over half of all the sub-segments—i.e., 15—will grow by more than 5 percent and eight markets will grow by double-digit percentages.

This pervasive growth is delivering general benefits to semiconductor suppliers, with 70 percent of chipmakers expected to enjoy revenue growth this year, up from 53 percent in 2013.

The figure below presents the growth of the DRAM and data flash segments compared to the rest of the semiconductor market in 2013 and 2014.

[Figure 2014-12-18_Semi_Sectors_Growth: DRAM and data flash growth vs. the rest of the semiconductor market, 2013 and 2014]

Semiconductor successes

The two market segments enjoying the strongest and most consistent growth over the last two years are DRAM and light-emitting diodes (LEDs). DRAM revenue will climb 33 percent in both 2013 and 2014, following declines, often steep ones, in five of the previous six years.

The LED market is expected to grow by more than 11 percent in 2014. This continues an unbroken period of growth for LED revenues stretching back at least 13 years.

Major turnarounds are occurring in the analog, discrete, and microprocessor markets, where every sub-segment will swing from decline to strong growth. Most of these segments will see growth improve by more than 10 percentage points compared with the declines experienced in 2013.

Furthermore, programmable logic devices (PLDs) and digital signal processor (DSP) application-specific integrated circuits (ASICs) will experience dramatic improvements in growth. PLD revenue will grow by 10.2 percent in 2014, compared to 2.1 percent in 2013, and DSP ASIC revenue will rise by 3.8 percent after a 31.9 percent collapse in 2013.

Moving on up

Among the top 20 semiconductor suppliers, MediaTek and Avago Technologies attained the largest revenue growth and rise in the rankings in 2014. Both companies benefited from significant acquisitions.

MediaTek is expected to jump up five places to the 10th rank and become the first semiconductor company headquartered in Taiwan to break into the Top 10. Avago Technologies is projected to jump up eight positions in the rankings to No. 15.

The strongest growth by a semiconductor company based purely on organic revenue increase is expected to be achieved by SK Hynix, with projected growth of nearly 23 percent.

No. 13-ranked Infineon has announced its plan to acquire International Rectifier. If that acquisition is finalized in 2014, the combined company would jump to No. 10 in the overall rankings with 16 percent combined growth.

The table below presents the preliminary IHS ranking of the world’s top 20 semiconductor suppliers in 2013 and 2014 based on revenue.

[Table 2014-12-18_Semi_Ranking_Final: Preliminary IHS ranking of the world's top 20 semiconductor suppliers by revenue, 2013 and 2014]

Troubles for consumer electronics and Japan

Semiconductor revenue in 2014 will grow in five of the six major semiconductor application end markets, i.e., data processing, wired communications, wireless communications, automotive electronics and industrial electronics. The only market segment experiencing a decline will be consumer electronics. Revenue will expand by double-digit percentages in four of the six markets.

Japan continues to struggle and is the only region worldwide that will see a decline in semiconductor revenue this year. The other three geographies—Asia-Pacific, the Americas and the Europe, Middle East and Africa (EMEA) region—will see healthy growth. The world will be led by Asia-Pacific, which will post an expected revenue increase of 12.5 percent.

By DAVE HEMKER, Senior Vice President and Chief Technology Officer, Lam Research Corp.

Given the current buzz around the Internet of Things (IoT), it is easy to lose sight of the challenges – both economic and technical. On the economic side is the need to cost-effectively manufacture up to a trillion sensors used to gather data, while on the technical side, the challenge involves building out the infrastructure. This includes enabling the transmission, storage, and analysis of volumes of data far exceeding anything we see today. These divergent needs will drive the semiconductor equipment industry to provide very different types of manufacturing solutions to support the IoT.

In order to fulfill the promise of the IoT, sensor technology will need to become nearly ubiquitous in our businesses, homes, electronic products, cars, and even our clothing. Per-unit costs for sensors will need to be kept very low to ensure the technology is economically viable. To support this need, trailing-edge semiconductor manufacturing provides a viable option, since fully depreciated wafer processing equipment can produce chips cost efficiently. For semiconductor equipment suppliers, this translates into additional sales of refurbished and productivity-focused equipment and upgrades that improve yield, throughput, and running costs.

In addition to being produced inexpensively, sensors intended for use in the IoT will need to meet several criteria. First, they need to operate on very low amounts of power; in fact, some may even be self-powered via MEMS (microelectromechanical systems)-based oscillators or the collection of environmental radio frequency energy, also known as energy harvesting/scavenging. Second, they will involve specialized functions, for example, the ability to monitor pH or humidity. Third, to enable the transmission of collected data to the supporting infrastructure, good wireless communications capabilities will be important. Finally, sensors will need to be small, easily integrated into other structures – such as a pane of glass – and available in new form factors, like flexible substrates for clothing. Together, these new requirements will drive innovation in chip technology across the semiconductor industry’s ecosystem.

The infrastructure needed to support the IoT, in contrast, will require semiconductor performance to continue its historical advancement of doubling every 18-24 months. Here, the challenges are a result of the need for vast amounts of networking, storage in the cloud, and big data analysis. Additionally, many uses for the IoT will involve risks far greater than those that exist in today’s internet. With potential medical and transportation applications, for example, the results of data analysis performed in real time can literally be a matter of life or death. Likewise, managing the security and privacy of the data being generated will be paramount. The real-world nature of the “things” themselves also adds enormous complexity to predictive analysis.

Implementing these capabilities and infrastructure on the scale imagined in the IoT will require far more powerful memory and logic devices than are currently available. This need will drive the continued extension of Moore’s Law and demand for advanced semiconductor manufacturing capability, such as atomic-scale wafer processing. Controlling manufacturing process variability will also become increasingly important to ensure that every device in the new, interconnected world operates as expected.

With development of the IoT, semiconductor equipment companies can look forward to opportunities beyond communications and computing, though the timing of its emergence is uncertain. For wafer processing equipment suppliers in particular, new markets for leading-edge systems used in the IoT infrastructure and productivity-focused upgrades for sensor manufacturing are expected to develop.