
Common pulsed measurement challenges are defined.


BY DAVID WYBAN, Keithley Instruments, a Tektronix Company, Solon, Ohio

For SMU and PMU users, an issue that sometimes arises when making transient pulse measurements is the presence of “humps” (FIGURE 1) in the captured current waveform at the rising and falling edges of the voltage pulse. These humps are caused by capacitances in the system originating from the cabling, the test fixture, the instrument, and even the device itself. When the voltage being output is changed, the stray capacitances in the system must be either charged or discharged and the charge current for this either flows out of or back into the instrument. SMUs and PMUs measure current at the instrument, not at the DUT, so the instrument measures these current flows while a scope probe at the device does not.

FIGURE 1. Humps in the captured current (red) waveform at the rising and falling edges of the voltage pulse.


This phenomenon is seen most often when the change in voltage is large or happens rapidly and the current through the device itself is low. The higher the voltage of the pulse or the faster the rising and falling edges, the larger the current humps will be. For SMUs with rise times in the tens of microseconds, these humps are usually only seen when the voltages are hundreds or even thousands of volts and the current through the device is only tens of microamps or less. However, for PMUs where the rise times are often less than 1μs, these humps can become noticeable on pulses of only a couple of volts, even when the current through the device is as high as several milliamps.
Although these humps in the current waveform may seem like a big problem, they are easy to eliminate. The humps are the result of the current being measured at the high side of the device, where the voltage is changing. Adding a second SMU or PMU at the low side of the device to measure current will make these humps go away: at the low side of the device the voltage does not change, so no charge or discharge currents flow, and the current measured at the instrument will match the current at the device. If this isn't an option, the problem can be minimized by reducing the stray capacitance in the system, chiefly by shortening the cables. Shorter cables mean less stray capacitance, which reduces the size of the humps in the current waveform.
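The size of these humps can be estimated from the familiar capacitor relation i = C × dv/dt. A minimal sketch in Python (the capacitance and timing values are illustrative assumptions, not from the article):

```python
# Rough estimate of the capacitive "hump" current: i = C * dv/dt.

def charging_current(stray_capacitance_f, voltage_step_v, rise_time_s):
    """Approximate average charging current for a linear voltage ramp."""
    return stray_capacitance_f * voltage_step_v / rise_time_s

# Typical coax runs on the order of 100 pF/m; assume 2 m of cable.
c_stray = 2 * 100e-12  # 200 pF

# SMU-style pulse: 100 V step with a 20 us rise time.
print(charging_current(c_stray, 100, 20e-6))   # ~1 mA (0.001 A)

# PMU-style pulse: only 2 V, but with a 100 ns rise time.
print(charging_current(c_stray, 2, 100e-9))    # ~4 mA (0.004 A)
```

Note how the fast PMU edge produces a larger hump than the 100 V SMU pulse, which is why the effect becomes visible on pulses of only a couple of volts.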

The next common pulse measurement issue is test lead resistance. As test currents get higher, the impact of this resistance becomes increasingly significant. FIGURE 2 shows an SMU that is performing a pulse I-V measurement at 2V across a 50mΩ load. Based on Ohm's Law, one might expect to measure a current through the device of 40A, but when the test is actually performed, the level of current measured is only 20A. That "missing" 20A is the result of test lead resistance. In fact, we were not pulsing 2V into 50mΩ but into 100mΩ instead, with 25mΩ per test lead. With 50mΩ of lead resistance, half of the output voltage sourced was dropped in the test leads and only half of it ever reached the device.

FIGURE 2. Impact of test lead resistance.


To characterize the device correctly, it’s essential to know not only the current through the device but the actual voltage at the device. On SMUs this is done by using remote voltage sensing. Using a second set of test leads allows the instrument to sense the voltage directly at the device; because almost no current flows through these leads, the voltage fed back to the instrument will match the voltage at the device. Also, because these leads feed the voltage at the device directly back into the SMU’s feedback loop, the SMU can compensate for the voltage drop across the test leads by outputting a higher voltage at its output terminals.
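The arithmetic behind Figure 2 can be sketched in a few lines (the function name is ours; the 2V, 50mΩ load, and 25mΩ-per-lead values come from the example above):

```python
def measured_current(v_source, r_load, r_lead_per_side):
    """Current when the source voltage divides across both leads plus the load (2-wire)."""
    return v_source / (r_load + 2 * r_lead_per_side)

v, r_load = 2.0, 0.050                      # 2 V pulse into a 50 mOhm load
print(measured_current(v, r_load, 0.0))     # ideal: 40 A
print(measured_current(v, r_load, 0.025))   # with 25 mOhm per lead: 20 A

# With remote (4-wire) sensing, the SMU raises its output until the full
# 2 V appears at the DUT. Its output terminals must then supply:
i_device = v / r_load                       # 40 A through the device
v_output = v + i_device * 2 * 0.025         # 2 V + 2 V of lead drop = 4 V
print(v_output)
```

The lead drop here is 1V per lead, comfortably inside the roughly 3V/lead compensation limit discussed below.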

Although SMUs can use remote sensing to compensate for voltage drops in the test leads, there is a limit to how much drop they can compensate for. For most SMUs, this maximum drop is about 3V/lead. If the voltage drop per lead reaches or exceeds this limit, strange things can start happening. First, the rise and fall times of the voltage pulse slow down, significantly increasing the time required to make a settled measurement. Given enough time for the pulse to settle, the voltage measurements may come back at the expected value, but the measured current will be lower than expected because the SMU is actually sourcing a lower voltage at the DUT than the level it is programmed to source.

If you exceed the source-sense lead drop while sourcing current, a slightly different set of strange behaviors may occur. The current measurement will come back as the expected value and will be correct because current is measured internally and this measurement is not affected by lead drop, but the voltage reading will be higher than expected. In transient pulse measurements, you may even see the point at which the source-sense lead drop limit was exceeded as the measured voltage suddenly starts increasing again after it appeared to be settling.

These strange behaviors can be difficult to detect in the measured data if you do not know what voltage to expect from your device. Therefore, inspecting your pulse waveforms fully when validating your test system is essential.

Minimizing test lead resistance is essential to ensuring quality pulse measurements. There are two ways to do this:

Minimize the length of the test leads. Wire resistance increases in direct proportion to the length of the wire: doubling the wire's length doubles the resistance. Keeping lead lengths no greater than 3 meters is highly recommended for high current pulse applications.

Use wire of the appropriate diameter or gauge for the current being delivered. The resistance of a wire is inversely proportional to the cross-sectional area of the wire. Increasing the diameter, or reducing the gauge, of the wire increases this area and reduces the resistance. For pulse applications up to 50A, a wire gauge of no greater than 12 AWG is recommended; for applications up to 100A, it's best to use no greater than 10 gauge.
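Both rules can be checked numerically with the standard AWG diameter formula and R = ρL/A. A sketch assuming room-temperature copper (the lead length is illustrative):

```python
import math

RHO_CU = 1.68e-8  # copper resistivity, ohm*m, at room temperature

def awg_diameter_m(awg):
    """Standard AWG formula: diameter doubles roughly every 6 gauge numbers."""
    return 0.000127 * 92 ** ((36 - awg) / 39)

def wire_resistance(length_m, awg):
    """R = rho * L / A: proportional to length, inversely proportional to area."""
    area = math.pi * (awg_diameter_m(awg) / 2) ** 2
    return RHO_CU * length_m / area

# 3 m leads means a 6 m round trip of wire:
print(wire_resistance(6, 12))  # ~0.031 ohm with 12 AWG
print(wire_resistance(6, 10))  # ~0.019 ohm with 10 AWG
```

Even at the recommended limits, a 50A pulse through 12 AWG leads drops about 1.5V, which is why both short leads and heavy gauge matter.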

Excessive test lead inductance is another common issue. In DC measurements, test lead inductance is rarely considered because it has little effect on the measurements. However, in pulse measurements, lead inductance has a huge effect and can play havoc with a system’s ability to take quality measurements.

FIGURE 3. Humps in the voltage waveform of transient pulse measurements due to test system inductance.


Humps in the voltage waveform of transient pulse measurements (FIGURE 3) are a common problem when generating current pulses. Just as with humps in the current waveforms, these humps can be seen in the data from the instrument but are nowhere to be seen when measured at the device with an oscilloscope. These humps are the result of the additional voltage seen at the instrument due to inductance in the cabling between the instrument and the device.

Equation 1: V = L · (di/dt)

Equation 1 describes the relation between inductance and voltage. For a given change in current over change in time (di/dt), the larger the inductance L, the larger the resulting voltage will be. It also tells us that for a fixed inductance L, the larger the change in current or the smaller the change in time, the larger the resulting voltage will be. This means that the larger the pulse and/or the faster the rise and fall times, the bigger the voltage humps will be.
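A quick numerical illustration of Equation 1 (the per-meter inductance and pulse values are assumed for illustration, not from the article):

```python
def inductive_voltage(inductance_h, delta_i_a, delta_t_s):
    """V = L * di/dt: voltage developed across the lead inductance."""
    return inductance_h * delta_i_a / delta_t_s

# Unpaired hookup wire is on the order of 1 uH/m; assume 2 m of leads.
l_leads = 2 * 1e-6  # 2 uH

# 1 A current pulse with a 1 us rise time:
print(inductive_voltage(l_leads, 1.0, 1e-6))   # 2.0 V hump
# The same pulse with a 10 us rise time:
print(inductive_voltage(l_leads, 1.0, 10e-6))  # ~0.2 V hump
```

Slowing the edge by 10x shrinks the hump by 10x, which is exactly the trade-off discussed below for instruments that cannot compensate.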

To remedy this problem, instruments like SMUs offer remote voltage sensing, allowing them to measure around this lead inductance and measure the voltage directly at the device. However, as with excessive lead resistance, excessive lead inductance can also cause a problem for SMUs. If the inductance is large enough and causes the source-sense lead drop to exceed the SMU’s limit, transient pulse measurement data will have voltage measurement errors on the rising and falling edges similar to the ones seen when lead resistance is too large. Pulse I-V measurements are generally unaffected by lead inductance because the measurements are taken during the flat portion of the pulse where the current is not changing. However, excessive lead inductance will slow the rising and falling edges of voltage pulses and may cause ringing on current pulses, thereby requiring larger pulse widths to make a good settled pulse I-V measurement.


The Anatomy of a Pulse
The amplitude and base describe the height of the pulse in the pulse waveform. Base describes the DC offset of the waveform from 0. This is the level the waveform will be both before and after the pulse. Amplitude is the level of the waveform relative to the base level; the absolute high level equals the base plus the amplitude. For example, a pulse waveform with a base of 1V and an amplitude of 2V would have a low level of 1V and a high level of 3V.
Pulse width is the time that the pulse signal is applied. It is commonly defined as the width in time of the pulse at half maximum also known as Full Width at Half Maximum (FWHM). This industry standard definition means the pulse width is measured where the pulse height is 50% of the amplitude.
Pulse period is the length in time of the entire pulse waveform before it is repeated and can easily be measured by measuring the time from the start of one pulse to the next.
The ratio of pulse width over pulse period is the duty cycle of the pulse waveform.
A pulse’s rise time and fall time are the times it takes for the waveform to transition from the low level to the high level and from the high level back down to the low level. The industry standard way to measure the rise time is to measure the time it takes the pulse waveform to go from 10% amplitude to 90% amplitude on the rising edge. Fall time is defined as the time it takes for the waveform to go from 90% amplitude to 10% amplitude on the falling edge.
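These definitions can be applied directly to a sampled waveform. A minimal sketch, using the sidebar's 1V-base / 2V-amplitude example with made-up sample points and linear interpolation between them:

```python
def pulse_metrics(t, v, base, amplitude):
    """FWHM pulse width and 10%-90% rise time from a sampled waveform.
    Linear interpolation between samples; assumes one clean pulse."""
    def crossing(level, rising=True):
        # Find the first time the waveform crosses `level` in the given direction.
        for i in range(1, len(v)):
            lo, hi = v[i - 1], v[i]
            if (rising and lo < level <= hi) or (not rising and lo > level >= hi):
                return t[i - 1] + (level - lo) / (hi - lo) * (t[i] - t[i - 1])
    half = base + 0.5 * amplitude
    width = crossing(half, rising=False) - crossing(half, rising=True)
    rise = crossing(base + 0.9 * amplitude) - crossing(base + 0.1 * amplitude)
    return width, rise

# Idealized trapezoidal pulse: base 1 V, amplitude 2 V, 1 us edges,
# times in microseconds (illustrative sample points):
t = [0.0, 1.0, 2.0, 6.0, 7.0, 10.0]
v = [1.0, 1.0, 3.0, 3.0, 1.0, 1.0]
width, rise = pulse_metrics(t, v, base=1.0, amplitude=2.0)
print(width, rise)  # FWHM = 5.0 us; 10%-90% rise ~0.8 us
```

The 10%-90% rise time comes out as 80% of the full 0%-100% edge, as expected for a linear ramp.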

Although SMUs are able to compensate for some lead inductance, PMUs have no compensation features, so the effects of inductance must be dealt with directly, such as by:

  • Reducing the size of the change in current by reducing the magnitude of the pulse.
  • Increasing the length of the transition times by increasing the rise and fall times.
  • Reducing the inductance in the test leads

Depending on the application or even the instrument, the first two measures are usually infeasible, which leaves reducing the inductance in the test leads. The amount of inductance in a set of test leads is proportional to the loop area between the HI and LO leads. So, in order to reduce the inductance in the leads and therefore reduce the size of the humps, we must reduce the loop area, which is easily done by simply twisting the leads together to create a twisted pair or by using coaxial cable. Loop area can be reduced further by simply reducing the length of the cable.

The global semiconductor materials market increased 3 percent in 2014 compared to 2013 while worldwide semiconductor revenues increased 10 percent. Revenues of $44.3 billion mark the first increase in the semiconductor materials market since 2011.

Total wafer fabrication materials and packaging materials were $24.0 billion and $20.4 billion, respectively. Comparable revenues for these segments in 2013 were $22.7 billion for wafer fabrication materials and $20.4 billion for packaging materials. The wafer fabrication materials segment increased 6 percent year-over-year, while the packaging materials segment remained flat. However, if bonding wire were excluded from the packaging materials segment, the segment increased more than 4 percent last year. The continuing transition to copper-based bonding wire from gold is negatively impacting overall packaging materials revenues.

For the fifth consecutive year, Taiwan was the largest consumer of semiconductor materials due to its large foundry and advanced packaging base, totaling $9.8 billion. Japan claimed the second spot. Annual revenue growth was strongest in the Taiwan market. The materials market in North America had the second-largest increase at 5 percent, followed by China, South Korea and Europe. The materials markets in Japan and Rest of World were flat relative to 2013 levels. (The ROW region is defined as Singapore, Malaysia, Philippines, other areas of Southeast Asia and smaller global markets.)

Region          2013 ($B)   2014 ($B)   % Change
Taiwan             8.91        9.58        8%
Japan              7.17        7.19        0%
South Korea        6.87        7.03        2%
Rest of World      6.64        6.66        0%
China              5.66        5.83        3%
North America      4.76        4.98        5%
Europe             3.04        3.08        1%
Total             43.05       44.35        3%

Source: SEMI, April 2015
Note: Figures may not add due to rounding.

The Material Market Data Subscription (MMDS) from SEMI provides current revenue data along with seven years of historical data and a two-year forecast.

The Semiconductor Industry Association (SIA), representing U.S. leadership in semiconductor manufacturing and design, today announced worldwide sales of semiconductors reached $27.8 billion for the month of February 2015, an increase of 6.7 percent from February 2014 when sales were $26.0 billion. Global sales from February 2015 were 2.7 percent lower than the January 2015 total of $28.5 billion, reflecting seasonal trends. Regionally, sales in the Americas increased by 17.1 percent compared to last February to lead all regional markets. All monthly sales numbers are compiled by the World Semiconductor Trade Statistics (WSTS) organization and represent a three-month moving average.

“The global semiconductor industry maintained momentum in February, posting its 22nd straight month of year-to-year growth despite macroeconomic headwinds,” said John Neuffer, president and CEO, Semiconductor Industry Association. “Sales of DRAM and Analog products were particularly strong, notching double-digit growth over last February, and the Americas market achieved its largest year-to-year sales increase in 12 months.”

Regionally, year-to-year sales increased in the Americas (17.1 percent) and Asia Pacific (7.6 percent), but decreased in Europe (-2.0 percent) and Japan (-8.8 percent). Sales decreased compared to the previous month in Europe (-1.6 percent), Asia Pacific (-2.2 percent), Japan (-2.3 percent), and the Americas (-4.4 percent).

“While we are encouraged by the semiconductor market’s sustained growth over the last two years, a key driver of our industry’s continued success is free trade,” Neuffer continued. “A legislative initiative called Trade Promotion Authority (TPA) has paved the way for opening markets to American goods and services for decades, helping to give life to nearly every U.S. free trade agreement in existence, but it expired in 2007. With several important free trade agreements currently under negotiation, Congress should swiftly re-enact TPA.”

February 2015
Billions
Month-to-Month Sales
Market Last Month Current Month % Change
Americas 6.51 6.23 -4.4%
Europe 2.95 2.90 -1.6%
Japan 2.62 2.56 -2.3%
Asia Pacific 16.47 16.10 -2.2%
Total 28.55 27.79 -2.7%
Year-to-Year Sales
Market Last Year Current Month % Change
Americas 5.32 6.23 17.1%
Europe 2.96 2.90 -2.0%
Japan 2.81 2.56 -8.8%
Asia Pacific 14.96 16.10 7.6%
Total 26.04 27.79 6.7%
Three-Month-Moving Average Sales
Market Sep/Oct/Nov Dec/Jan/Feb % Change
Americas 6.53 6.23 -4.6%
Europe 3.19 2.90 -9.2%
Japan 2.93 2.56 -12.7%
Asia Pacific 17.12 16.10 -6.0%
Total 29.77 27.79 -6.7%

Packages are changing. Acoustic methods provide a way to image and analyze them.

BY TOM ADAMS, SONOSCAN, INC., Elk Grove Village, IL

By the year 2020, the design, dimensions and materials of various electronic component packages will have changed in varying degrees from their current forms. PEMs (plastic-encapsulated microcircuits) will still be in production, but likely with shrinking sizes and better (or less expensive) encapsulants. Stacking of die connected by non-wire methods such as through-silicon vias (TSVs) will be in production. These and other package types, along with components such as ceramic chip capacitors, will need to be inspected for internal anomalies, typically by non-destructive acoustic micro imaging. This article takes a forward look at some of the challenges and changes that may take place in various packages and the possible advances in acoustic methods for imaging and analyzing them.

In electronic components, the business of acoustic micro imaging is to make visible and analyze internal structural features. Acoustic micro imaging tools such as Sonoscan's C-SAM series are used to image anomalies and defects, or to verify their absence. The defects are typically gaps – delaminations, voids, cracks, non-bonds and the like – but an acoustic micro imaging tool will also reveal surprises such as the out-of-place or missing die sometimes noted in counterfeit components.

New acoustic imaging methods

Today, the prevalent imaging mode for acoustic micro imaging tools is what is commonly called the Time Domain Amplitude Mode. The scanning transducer sends a pulse of VHF (5 to 100 MHz) or UHF (above 100 MHz) ultrasound into an x-y location. A few microseconds later, the transducer receives a number of echoes from the depth of interest. The amplitude of the highest-amplitude echo within a gate (time window) is used to assign a pixel value to that x-y location. The other echoes are ignored.
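That gating logic can be sketched in a few lines (the echo times, amplitudes and gate values are made up for illustration):

```python
def amplitude_mode_pixel(times, echoes, gate_start, gate_end):
    """Time Domain Amplitude Mode, per pixel: keep only echoes inside the
    time gate (i.e., from the depth of interest) and return the largest."""
    gated = [abs(a) for t, a in zip(times, echoes) if gate_start <= t <= gate_end]
    return max(gated) if gated else 0.0

# Echo arrival times (us) and amplitudes at one x-y location:
t = [0.8, 1.9, 3.1, 4.4]
a = [0.9, -0.35, 0.6, -0.1]
# Gate on the depth of interest, e.g. 1.5-3.5 us:
print(amplitude_mode_pixel(t, a, 1.5, 3.5))  # -> 0.6
```

The strong 0.9 echo at 0.8 us (a shallower interface) is ignored because it falls outside the gate.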

At the moment, there are about a dozen other imaging modes that collect data in different ways and yield different information and images about a sample. One example: in imaging IGBT modules it is important to measure and map the thickness of the solder bonding the heat sink to the ceramic raft above. The Time Difference mode will map this interface; it ignores echo amplitude altogether and uses the arrival time of the echoes to measure and map the thickness of the solder. Irregular solder thickness often means that the raft is tilted or warped (and thus may restrict heat dissipation). Other acoustic imaging modes use other techniques to detect thickness variations.
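The thickness calculation behind such a time-of-flight mode reduces to distance = velocity × time / 2, since the pulse crosses the layer twice (down and back). A sketch with an assumed sound velocity for solder (the numbers are illustrative):

```python
def thickness_from_echoes(t_top_s, t_bottom_s, velocity_m_s):
    """Layer thickness from the arrival-time difference between the echoes
    off its top and bottom interfaces. Round trip => divide by 2."""
    return velocity_m_s * (t_bottom_s - t_top_s) / 2

# Assumed sound velocity in solder ~3650 m/s; echoes 80 ns apart:
print(thickness_from_echoes(1.00e-6, 1.08e-6, 3650))  # ~146 um (1.46e-4 m)
```

Mapping this value across every x-y location yields the solder-thickness map used to detect a tilted or warped raft.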

The Frequency Domain mode produces multiple images of the target depth in a sample. Each image is made using echoes within a very narrow frequency range (e.g., 102.0-103.5 MHz). This mode is useful in samples having subtle anomalies or defects that may be hard to discern with, say, Amplitude Mode.

A new mode is typically developed when the user of an acoustic micro imaging tool expresses the need to push acoustic imaging beyond its current capabilities in order to solve a specific inspection problem. In some instances an existing mode that was previously developed for research purposes is found to be useful for emerging sample types. It is very likely that new acoustic imaging modes will be developed as electronic components and assemblies continue to evolve.

A recently developed mode is the Echo Integral Mode. It gives a view similar to, but more informative than, the Amplitude Mode. While Amplitude Mode picks the highest single amplitude to assign a pixel value, the Echo Integral Mode uses the sum of the amplitudes of all the echoes at a given x-y coordinate to determine the pixel color for that coordinate. This approach makes it easier to see subtle local differences in, say, the quality of a bond between two materials.
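The difference between the two modes can be shown in a few lines (the amplitudes are made up; real tools work on digitized echo trains):

```python
def echo_integral_pixel(echoes):
    """Return (sum of all echo amplitudes, single largest echo) for one
    x-y coordinate: Echo Integral Mode versus Amplitude Mode."""
    mags = [abs(a) for a in echoes]
    return sum(mags), max(mags)

# Two coordinates with the same peak echo but different bond quality:
good_bond = [0.50, 0.10, 0.05]
weak_bond = [0.50, 0.30, 0.25]
print(echo_integral_pixel(good_bond))  # sum ~0.65, peak 0.5
print(echo_integral_pixel(weak_bond))  # sum ~1.05, peak 0.5
```

Amplitude Mode would render both pixels identically (peak 0.5); the integral separates them, which is why subtle bond-quality differences become visible.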

FIGURE 1 is the Thru-Scan mode image of a plastic BGA package. Thru-Scan pulses ultrasound into the top of the package and uses a sensor beneath the package to read the amplitude of the arriving ultrasound at each x-y location. Gap-type defects block ultrasound and thus appear in a Thru-Scan image as black acoustic shadows.

FIGURE 1. Thru-Scan image shows acoustic shadows of anomalies in a BGA package, but gives no depth information.


In Figure 1, the black features within the die at center are surely significant anomalies, but an engineer cannot tell from this Thru-Scan image what depth they lie at: are they in the die attach material or in the substrate below?

At left in FIGURE 2 is the Amplitude Mode image of the die area. This image is gated on (reads echoes only from) the die attach depth, and ignores echoes from other depths. The black dots are not features in the gated depth, but are the acoustic shadows of voids in the mold compound above the die. The die area itself is rather uniformly pale gray, with no features of note. The image at right used the Echo Integral Mode, also gated on the die attach material. Using the summed amplitude of all the echoes at each of millions of x-y coordinates gave a different result: there are significant differences in brightness. The large bright area marked by arrows is a gap-type defect in the die attach, and there are other, smaller defects of the same type. The defects imaged as black shadows by Thru-Scan are imaged here as near-white defects by the Echo Integral Mode. They are clearly in the die attach, and not in the substrate. The roughly spherical feature in the upper right of the Thru-Scan image, however, is the shadow of the void in the mold compound above the die.

FIGURE 2. Amplitude mode (left) shows no defects, but Echo Integral Mode (right) shows locations of defects in the die attach.


Components will continue to shrink

Sonoscan’s laboratories have for some time been imaging PEMs that are only 200 microns thick and 3mm x 3mm in area. The die is typically less than 100 microns thick. In some ways, the small dimensions are an advantage in acoustic imaging: the plastic encapsulant scatters and absorbs ultrasound, so the less encapsulant the pulse and the resulting echo need to travel through, the better the resolution in the acoustic image. Such a component may be imaged with the very high frequency of 230 MHz, rather than the 15 MHz to 100 MHz of larger plastic packages. Higher frequency means better spatial resolution in the acoustic image.

One of the most commonly imaged non-PEM components is the ceramic chip capacitor, where the goal is to image delaminations and cracks that can lead to leakage between electrode layers. The very smallest ceramic chip capacitors currently being manufactured measure 0.010 inch by 0.005 inch. They can be imaged acoustically, but extremely small dimensions make imaging time-consuming.

Mid-end components

So named because they involve both front-end and back-end processes, mid-end components are typically assembled by mounting flip chips onto a wafer and then encapsulating the flip chips with plastic before dicing the wafer. They have been described as non-wired QFNs.

What has evolved is that some mid-end components can be imaged well enough to see details of the solder bump bonds, while others cannot. Sonoscan has developed transducers having an acoustic frequency that is low enough to get through the plastic encapsulant, and high enough to give good details about the bump bonds.

But many mid-end components have an encapsulant that is only partly transparent to ultrasound. Gross features and defects will be visible, but not the details of the bump bonds, which will probably become even smaller in the future. The alternative is to use the Thru-Scan imaging mode. Any gap in the path, such as a break in a solder bump, will block the arriving ultrasound and be visible as a black feature. These acoustic shadows contain no information about the depth of a feature, but the relatively simple design along with experience with a given mid-end component are helpful.

The evolution of package design may in time alleviate the encapsulant problem. The trend is toward more chip-on-wafer type designs, and toward ever-smaller dimensions. The encapsulants may perhaps become unnecessary; their departure would enhance acoustic inspection.

Stacked die

Individual components typically have industry standards that can be used to judge the risk posed by a void in the die attach material or a delamination along a lead finger. Stacked die have no industry standards; presumably each maker of stacked die uses their own guidelines to reduce field failures.

Die stacks can be imaged acoustically before encapsulation, and in the future some may be imaged after encapsulation, particularly if ultrasound-friendly encapsulants are used. In both situations, the same problem occurs: each pulse encountering a material interface is partly reflected and partly transmitted across the interface. Unencapsulated stacks are typically imaged during development in order to refine assembly processes. Even a four-die stack (that has at least eight interfaces) can generate so many echoes that it becomes very difficult to identify the echo being sent by the delamination of the adhesive on the top of die #3.

For unencapsulated stacks, this problem has largely been solved by software developed jointly by Sonoscan and the Technical University of Dresden. The software uses material properties and dimensions to create a virtual stack as much like the physical stack as possible, and works out the imaging techniques, which are then further refined on the physical stack. The goal is to identify the echoes that were returned from specific depths of interest – e.g., the interface between the bottom surface of die #6 and the adhesive beneath it. By repeatedly moving between the virtual sample and the physical sample, the imaging parameters are defined that will show the echoes at this depth.

Nearly all memory devices are stacked, and the die are wire-bonded to each other. But stacks come in many different configurations; one common configuration puts a small memory chip on top of a larger processing chip.

It’s hard to tell where the architecture of die stacks may go from here. In some stacks, through-silicon vias (TSVs) will replace wires. Defects such as delaminations will be visible acoustically, but whether the TSVs will be visible acoustically is difficult to judge at this point. What manufacturers want to see is that each TSV is filled. Their diameters are already extremely small. Whether acoustic methods will be devised to make them nondestructively visible is not known yet.

A long-standing problem in imaging typical PEMs is that a delamination on the back side of the die paddle cannot be imaged when scanning the top side of the PEM. Before the PEM is surface-mounted, it can simply be flipped over and imaged from the back side. After mounting, only the top surface is available for scanning. The problem is that there are too many interfaces: the pulsed ultrasound must cross the top surface of the plastic, the plastic-to-die interface, the die-to-die attach interface, and the die attach-to-die paddle interface. This is essentially the same problem encountered in the imaging of stacked die. In theory, a delamination between the die paddle and the plastic below it can be located and imaged by the software developed for die stacks.

Package-on-package

Package-on-package assemblies, such as a package containing one or more memory die on top of a package containing one or more logic die, are beginning to appear in Sonoscan’s testing laboratories. These package designs have some advantages over the stacking of die; for example, if one of the two packages is found to be defective before assembly, it can be replaced, while the logic package is retained. It seems likely that the popularity of these assemblies will increase in the next few years.

After the two packages are bonded together, the chief structural reliability concern is the adhesive between the two packages. This is where gap-type defects, primarily voids, may be found. If present, voids put stress on the solder joints for the BGA balls.

How acoustic imaging is performed depends on the structure of the assembly. Normal reflection-mode pulse-echo imaging can sometimes be used, but the assembly is likely to have numerous material interfaces that could limit the effectiveness of this method. Because internal structural defects in this assembly are largely limited to voids at a specific known depth, it often makes more sense to use the Thru-Scan mode to reveal the voids.

Interposers

The term “interposer” is used rather loosely to describe a redistribution layer between a chip and the solder balls that make connection with a substrate. In terms of acoustic imaging, interposers behave much like flip chips, in that the depth of interest is between two structures.

The common defects are delaminations, significant because they are capable of attracting contaminants (and thus causing corrosion) and of expanding through thermal cycling. The growth of chips having advanced processing capabilities will likely make the acoustic imaging of interposers more frequent.

Summary

The advantage of acoustic micro imaging tools is their ability to nondestructively image gap-type anomalies and certain other anomalies (tilting, warping) in electronic materials. In recent years, the original Amplitude Mode has been joined by roughly a dozen other modes that push imaging capabilities into new areas.

It can be expected that electronic components will continue to add their own capabilities and to reduce their physical dimensions. Some components will become more difficult to image; others, particularly those that become thinner or that use acoustically friendly materials, may permit the use of higher frequencies to image smaller features. Since there is no good non-destructive substitute for acoustic modes, engineers who demand reliability may want to apply acoustic micro imaging to new device configurations and keep track of new acoustic imaging modes.

Machine learning based advanced analytics for anomaly detection offers powerful techniques that can be used to achieve breakthroughs in yield and field defect rates.

BY ANIL GANDHI, PH. D. and JOY GANDHI, Qualicent Analytics, Inc., Santa Clara, CA

In the last few decades, the volume of data collected in semiconductor manufacturing has grown steadily. Today, with the rapid rise in the number of sensors in the fab, the industry faces a torrent of data that presents major challenges for analysis. Data by itself isn’t useful; it must be converted into actionable information to drive improvements in factory performance and product quality. At the same time, product and process complexities have grown exponentially, requiring new ways to analyze huge datasets with thousands of variables to discover patterns that would otherwise go undetected by conventional means.

In other industries such as retail, finance, telecom and healthcare where big data analytics is becoming routine, there is widespread evidence of huge dollar savings from application of these techniques. These advanced analytics techniques have evolved through computer science to provide more powerful computing that complements conventional statistics. These techniques are revolutionizing the way we solve process and product problems in the semiconductor supply chain and throughout the product lifecycle. In this paper, we provide an overview of the application of these advanced analytics techniques towards solving yield issues and preventing field failures in semiconductors and electronics.

Advanced data analytics builds on prior methods to achieve breakthrough yields, zero-defect quality, and optimized product and process performance. The techniques can be used as early as product development and all the way through high-volume manufacturing, and they provide a cost-effective observational supplement to expensive DOEs. They include machine learning algorithms that can handle hundreds to thousands of variables in big or small datasets, a capability that is indispensable at advanced nodes with complex fab process technologies and product functionalities where defects become intractable.

Modeling target parameters

Machine learning builds predictive models of targets such as yield or field defect rate as functions of process, PCM, sort, or final test variables. In the development phase, the challenge is to eliminate major systematic defect mechanisms and optimize new processes or products to ensure high yields during the production ramp. Machine learning algorithms reduce the number of variables from hundreds or thousands to the few key variables of importance; this reduction is just sufficient to allow nonlinear models to be built without overfitting. Using the model, a set of rules involving these key variables is derived. These rules provide the best operating conditions to achieve the target yield or defect rate. FIGURE 1 shows an example non-linear predictive model.

FIGURE 1. Predictive model example.
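The reduce-then-model flow described above can be sketched in a few lines of Python. Everything here is an illustrative stand-in (synthetic data and a simple correlation-based importance score, rather than the tree-derived scores used in practice): generate many candidate predictors of which only two actually drive yield, score every predictor, and keep a handful of key variables before any model is fit.

```python
import random
import statistics

random.seed(0)

# Hypothetical fab data: 200 lots, 50 candidate predictors (a real fab
# dataset would have hundreds to thousands of variables).
n_lots, n_vars = 200, 50
X = [[random.gauss(0, 1) for _ in range(n_vars)] for _ in range(n_lots)]
# Yield depends on only two "key" variables plus noise.
y = [0.9 - 0.05 * row[3] + 0.03 * row[17] + random.gauss(0, 0.01)
     for row in X]

def importance(j):
    """Absolute correlation of predictor j with yield (a stand-in for the
    tree-based importance scores used in practice)."""
    col = [row[j] for row in X]
    mx, my = statistics.mean(col), statistics.mean(y)
    sx, sy = statistics.stdev(col), statistics.stdev(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(col, y)) / (len(y) - 1)
    return abs(cov / (sx * sy))

# Reduce the feature set to a manageable handful before any model is built.
ranked = sorted(range(n_vars), key=importance, reverse=True)
key_vars = ranked[:5]
print("key variables:", key_vars)  # variables 3 and 17 should rank highly
```

Only after this reduction step does it become practical to fit a nonlinear model, and to extract rules, without overfitting.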

FIGURE 2 is another example of rules extracted from a model, showing that when all conditions of the rule hold across the three predictors simultaneously, the result is lower yield. Discovering this signal with standard regression techniques failed because of the influence of a large number of manufacturing variables: individually, each variable has a small, negligible influence, but together they create noise that masks the signal. Standard regression techniques available in commercial software are therefore unable to detect the signal in these instances and are not of practical use for process control. So how do we discover rules such as the ones shown in Fig. 2?

FIGURE 2. Individual parameters M, Q and T do not exert influence while collectively they create conditions that destroy yield. Machine learning methods help discover these conditions.

Rules discovery

Conventionally, a parametric hypothesis is made based on prior knowledge (process domain knowledge) and then the hypothesis is tested. For example, to improve an etest metric such as threshold voltage, one could start with a hypothesis that connects this backend parameter with RF power on an etch process in the frontend. However, it is often impossible to make a hypothesis based on domain knowledge because of the complexity of the processes and the variety of possible interactions, especially across several steps. Alternatively, a generalized model with cross terms is proposed, the significant coefficients are kept, and the rest are discarded. This works when the number of variables is small but fails with a large number of variables. With 1,100 variables (a very conservative number for fabs) there are roughly 221 million possible 3-way interactions and over 600,000 2-way cross terms on top of the linear coefficients!
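The combinatorics are easy to check directly, since the counts depend only on the number of variables:

```python
from math import comb

n = 1100  # candidate variables, a conservative number for a fab

three_way = comb(n, 3)  # possible 3-way interactions
two_way = comb(n, 2)    # possible 2-way cross terms

print(f"{three_way:,} three-way interactions")  # 221,228,700
print(f"{two_way:,} two-way cross terms")       # 604,450
```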

Fitting these coefficients would require a number of samples or records that is clearly not available in the fab. Recognizing that most of the variables and interactions have no bearing on yield, we must first reduce the feature set size (i.e., the number of predictors) to a manageable limit (< 15) before applying any model; several machine learning techniques based on derivatives of decision trees are available for feature reduction. Once the feature set is reduced, exact models are developed using a palette of techniques such as advanced variants of piecewise regression.

In essence, what we have described above is discovery of the hypothesis, whereas traditionally one starts with a hypothesis to be tested. The example in Fig. 2 had 1,100 variables, most of which had no influence; six had measurable influence (three of them are shown), and all were hard to detect because of dimensional noise.

The above type of technique is part of a group of methods classified as supervised learning. In this type of machine learning, one defines the predictors and target variables and the technique finds the complex relationships or rules governing how the predictors influence the target. In the next example we include the use of unsupervised learning which allows us to discover clusters that reveal patterns and relationships between predictors which can then be connected to the target variables.

FIGURE 3. Solar manufacturing line conveyor, sampled at four points for colorimetry.

FIGURE 3 shows a solar manufacturing line with four panels moving on a conveyor. The end measure of interest that needed improvement was cell efficiency. Measurements are made at the anneal step for each panel, at the locations labeled 1, 2, 3 and 4 shown in FIGURE 4. The ratio between measurement sites with respect to a key metric called colorimetry was discovered to be important by employing clustering algorithms, which are part of unsupervised learning. A subsequent supervised model found this ratio to influence PV solar efficiency as part of a 3-way interaction.

FIGURE 4: The ratios between 1, 2, 3, 4 colorimetry were found to have clusters and the clusters corresponded to date separation.

In this case, without the use of unsupervised machine learning methods, it would have been impossible to identify the ratio between two predictors as an important variable affecting the target because this relationship was not known and therefore no hypothesis could be made for testing it among the large number of metrics and associated statistics that were gathered. Further investigation led to DATE as the determining variable for the clusters.
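As a toy illustration of how such clusters surface (the ratio values below are hypothetical, and a real analysis would use a library clustering implementation over many metrics, not this bare-bones Lloyd's algorithm):

```python
import random

random.seed(1)

# Hypothetical colorimetry ratios (site 1 / site 3) for panels run on two
# different dates; the two dates produce subtly different ratios.
ratios = ([random.gauss(0.97, 0.01) for _ in range(40)] +   # before date X
          [random.gauss(1.05, 0.01) for _ in range(40)])    # after date X

def kmeans_1d(data, k=2, iters=50):
    """Minimal Lloyd's algorithm in one dimension."""
    centers = sorted(random.sample(data, k))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in data:
            nearest = min(range(k), key=lambda i: abs(x - centers[i]))
            groups[nearest].append(x)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

centers, groups = kmeans_1d(ratios)
print(sorted(round(c, 2) for c in centers))  # two well-separated clusters
```

Unsupervised clustering like this flags the ratio as carrying structure; connecting the clusters back to DATE, and then to efficiency, is the supervised step that follows.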

Ultimately the goal was to create a model for cell efficiency. The feature reduction described earlier was performed, followed by advanced piecewise regression. The resulting model, validated by 10-fold cross validation (build the model on 80% of the data, test against the remaining 20%, and repeat ten times with a different random sample each time), is a complex non-linear model whose key element is a 3-way interaction, as shown in FIGURE 5. The dark green area represents the condition that drops the median efficiency by 30% from best-case levels. This condition (colorimetry < 81, Date > X and N2 < 23.5) creates the exclusion zone that should be avoided to improve cell efficiency.

FIGURE 5. N2 (x-axis) < 23.5, colorimetry < 81 and Date > X represent the “bad” condition (dark green) where the median cell efficiency drops by 30% from best case levels.

Advanced anomaly detection for zero defect

Throughout the production phase, process control and maverick part elimination are key to preventing failures in the field at early life and the rest of the device operating life. This is particularly crucial for automotive, medical device and aerospace applications where field failures can result in loss of life or injury and associated liability costs.

The challenge in screening potential field failures is that these are typically marginal parts that pass individual parameter specifications. With increased complexity and hundreds to thousands of variables, monitoring a handful of parameters individually is clearly insufficient. We present a novel machine learning-based approach that uses a composite parameter that includes the key variables of importance.

Conventional single parameter maverick part elimination relies on robust statistics for single parameter distributions. Each parameter control chart detects and eliminates the outliers but may eliminate good parts as well. Single parameter control charts are found to have high false alarm rates resulting in significant scrap rates of good material.

In this novel machine learning based method, the composite parameter uses a distance measure from the centroid in multidimensional space. Just as in single-parameter SPC charts, data points that lie farthest from the distribution and cross the limits are mavericks and are eliminated. In that sense the implementation of this method is very similar to conventional SPC charts, while the algorithm complexity is hidden from the user.
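A minimal sketch of such a composite distance chart, using synthetic data and the classical Mahalanobis distance as the composite parameter (the actual Qualicent distance method is proprietary; this only illustrates the idea of a multivariate distance with an SPC-style limit):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test data: 500 parts, 4 correlated parametric measurements.
cov = np.array([[1.0, 0.8, 0.1, 0.0],
                [0.8, 1.0, 0.1, 0.0],
                [0.1, 0.1, 1.0, 0.2],
                [0.0, 0.0, 0.2, 1.0]])
parts = rng.multivariate_normal(mean=np.zeros(4), cov=cov, size=500)

# A "maverick" part: within distribution on every single parameter, but its
# combination of values violates the normal correlation structure.
maverick = np.array([2.0, -2.0, 0.0, 0.0])
parts = np.vstack([parts, maverick])

# Composite parameter: Mahalanobis distance from the centroid.
centroid = parts.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(parts.T))
delta = parts - centroid
d = np.sqrt(np.einsum("ij,jk,ik->i", delta, inv_cov, delta))

limit = np.percentile(d, 99.5)  # control limit, analogous to an SPC chart
print("maverick distance:", round(float(d[-1]), 1),
      "limit:", round(float(limit), 1))
```

No single-parameter chart would flag this part (each value is within ±2 sigma), yet its composite distance stands far outside the limit.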

FIGURE 6. Comparison of single parameter control chart for the top parameter in the model and Composite Distance Control Chart. The composite distance method detected almost all field failures without sacrificing good parts whereas the top parameter alone is grossly insufficient.

See FIGURE 6 for a comparison of the single-parameter control chart of the top variable of importance versus the composite distance chart. TABLES 1 and 2 show the confusion matrices for these charts. With the single-parameter approach, the topmost contributing parameter detects 1 out of 7 field failures; we call this the accuracy. However, only one out of 21 declared fails is actually a fail; we call this the purity of the fail class. More failures could potentially be detected by lowering the limit in the top chart, but in that case the purity of the fail class, which was already poor, degrades rapidly to unacceptable levels.

TABLE 1. Top Parameter

TABLE 2. Composite Parameter

With the composite distance method, on the other hand, 6 out of 7 fails are detected (good accuracy). The cost of this detection is also low (high purity), because 6 of 10 declared fails are actual field failures. That is far better than 1 out of 21 in the incumbent case, and better still than the single top-parameter chart with its limit lowered even a little.

We emphasize two key advantages of this novel anomaly detection technique. First, its multivariate nature enables detection of marginal parts that not only pass the specification limits for individual parameters but are also within distribution for every parameter taken individually. The composite distance successfully identifies marginal parts that fail in the field. Second, this method significantly reduces the false alarm risk compared to single-parameter techniques. This reduces the cost associated with the “producer’s risk” (alpha risk) of rejecting good units. In short: better detection of maverick material at lower cost.

Summary and conclusion

Machine learning based advanced analytics for anomaly detection offers powerful techniques that can be used to achieve breakthroughs in yield and field defect rates. These techniques are able to crunch large datasets with hundreds to thousands of variables, overcoming a major limitation of conventional techniques. The two key methods explored in this paper are as follows:

Discovery – This set of techniques provides a predictive model that contains the key variables of importance affecting target metrics such as yield or field defect levels. Rules discovery (a supervised learning technique), among the many other methods we employ, discovers rules that provide the best operating or process conditions to achieve the targets; alternatively, it identifies exclusion zones that should be avoided to prevent loss of yield and performance. Discovery techniques can be used during the early production phase, when the need to eliminate major yield or defect mechanisms before the high-volume ramp is greatest, and they are equally applicable in high-volume production.

Anomaly Detection – This method, based on the unsupervised learning class of techniques, is an effective tool for maverick part elimination. The composite distance process control based on Qualicent’s proprietary distance analysis method provides a cost-effective way of preventing field failures. At leading semiconductor and electronics manufacturers, the method has predicted actual automotive field failures that occurred at top carmakers.

Supplier Hub answers the needs of a changing semiconductor industry. 

BY LUC VAN DEN HOVE, imec, Leuven, Belgium

Our semiconductor industry is a cyclical business, with regular ups and downs. But we have always successfully rebounded, with new technologies that have brought on the next generation of electronic products. Now, however, the industry stands at an inflection point: some of the challenges of introducing next-generation technologies are larger than ever before. Overcoming this point will require, in our opinion, tighter collaboration than ever. To accommodate that collaboration, we have set up a new Supplier Hub, a neutral platform where researchers, IC producers, and suppliers work on solutions to technical challenges. This collaboration will allow the industry to overcome the inflection point and move on to the next cycle of success, driven by the many exciting application domains appearing on the horizon.

Call for a new collaboration model

The formulas for the industry’s success have changed. Device structures are pushing the limits of physics, making it challenging to continue progressing according to Moore’s Law. Intricate manufacturing requirements make process control ever more difficult. Also chip design is more complex than ever before, requiring more scrutiny, analysis and testing before manufacturing can even begin. And the cost of manufacturing equipment and setting up a fab has risen exponentially, shutting out many smaller companies and forcing equipment and material suppliers to merge.

In that context, more and more innovation is coming from the supplier community, from both equipment and material suppliers. But as processes approach fundamental limits (material, chemical, and physical), it is also becoming more difficult for suppliers to operate and to develop next-generation process steps in isolation. An earlier and stronger interaction among suppliers is needed.

All this makes a central and neutral platform more important than ever. That insight, and the requests we received from partners, set imec on the path to organizing a supplier hub: a hub structured as a neutral, open-innovation R&D platform, for which we make a substantial part of our 300mm cleanroom floor space available and are even extending our facilities. It is a platform where suppliers and manufacturers collaborate side by side with the researchers developing next-generation technology nodes.

Organizing the supplier hub is a logical evolution in the way we have always set up collaborations with and between companies involved in semiconductor manufacturing, collaborations that proved very successful over the previous decade and resulted in a number of key innovations.

Supplier Hub off to a promising start

Today, both in logic and in memory, we are developing solutions to enable 7nm and 5nm technology nodes. These will involve new materials, new transistor architectures, and ever shrinking dimensions of structures and layers. At imec, the bulk of scaling efforts like these used to be done in collaborative programs involving IDMs and foundries, but also the fabless and fablite companies. All of these programs were strongly supported by our partnerships with the supplier community.

But today, to work out the various innovations in process steps needed for future nodes, we simply need a stronger and more strategic engagement from the supplier community, involving experimentation on the latest tools, even while they are still under development. And vice versa: the tool and material suppliers can no longer develop tools based on spec documents alone. To fabricate their products successfully and on time, they need to develop and test in a real process flow, and be involved in the development of new device concepts, so that they can build tools and design process steps that match the requirements of the new devices.

A case in point: it is no longer possible to develop and assess the latest generation of advanced lithography without matching materials and etch processes. And conversely, the other tool suppliers need the results of the latest litho developments. So today, all process steps have to be optimized concurrently with the other process steps, integrating material innovations at the same time. This is absolutely necessary for success.

So that’s where the Supplier Hub enters.

In 2013, imec announced an extended collaboration with ASML involving the setup of an advanced patterning center, which will grow to 100 engineers. In 2014, the new center started operations as the cornerstone of the supplier hub. In mid-2014, Lam Research agreed to partake in the hub, and since then a growing number of suppliers have been joining, among them the big names in the industry. Some of the more recent collaborations we announced were with Hitachi (CD-SEM metrology equipment) and SCREEN Semiconductor Solutions (cleaning and surface preparation tools).

At the end of 2014, ASML started installing its latest EUV tool, the NXE:3300. In the meantime, we have begun building a new cleanroom next to our existing 300mm infrastructure; the extra floor space will be needed to accommodate all the additional equipment that will arrive in the frame of the tighter collaboration among suppliers. Finally, during our October 2014 Internal Partner Conference, we organized a first Supplier Collaboration Forum, where the suppliers discussed and evaluated their projects with all partners, representing a large share of the semiconductor community.

We have also been expanding the supplier hub concept through a deeper involvement of material suppliers. These will prove a cornerstone of the hub, as many advances we need for scaling to the next nodes will be based on material innovations.

Enabling the Internet-Of-Everything

I hold great optimism for the industry. In recent years, the success of mobile devices has fueled the demand for semiconductor-based products. These mobile applications will continue to stimulate data consumption, going from 4G to 5G as consumers clamor for greater data availability, immediacy, and access. Beyond the traditional computing and communications applications loom new markets, collectively called the ‘Internet of Everything.’

In addition, nanoelectronics will enable disruptive innovations in healthcare to monitor, measure, analyze, predict and prevent illnesses. Wearable devices have already proven themselves in encouraging healthier lifestyles. The industry’s challenge is now to ensure that the data delivered via personal devices meet medical quality standards. In that frame, our R&D efforts will continue to focus on ultra-low-power multi-sensor platforms.

While there are many facets to the inflection point puzzle, the industry’s answers are beginning to take shape. The cost of finding new solutions will keep rising, and individual companies carry ever larger risks if their choices prove wrong. But through closer collaboration, companies can share that risk while developing solutions, exploring and creating new technologies, shortening time to market, and preparing to bring a new generation of products to a waiting world. The industry may indeed stand at an inflection point, but the future is bright. Innovation cannot be stifled, and collaboration remains the consensus of an industry focused on the next new thing. Today, IC does not just stand for Integrated Circuit; it also calls for Innovation and Collaboration.

With an impressive 20 percent growth in MEMS revenue compared to 2013, and sales revenues of more than $1.2B, Robert Bosch GmbH is the clear #1.

From Yole Développement’s yearly analysis of the “TOP 100 MEMS Players,” analysts have released the “2014 TOP 20 MEMS Players Ranking.” This ranking shows the clear emergence of what could be a future “MEMS titan”: Robert Bosch GmbH (Bosch). Driven by sales of MEMS for smartphones, including pressure sensors, Bosch’s MEMS revenue increased by 20 percent in 2014, totaling $1.2B. The gap between Bosch and STMicroelectronics now stands at more than $400M.

“The top five remains unchanged from 2013, but Bosch now accounts for one-third of the $3.8B MEMS revenue shared by the top five MEMS companies. Together, these five companies account for around one-third of the total MEMS business,” details Jean-Christophe Eloy, President & CEO, Yole Développement (Yole). “It’s also interesting to see that among the top thirty players, almost every one increased its revenue in 2014,” he adds.

In other noteworthy news, Texas Instruments’ sales saw a slight increase thanks to its DLP projection business. RF companies also enjoyed impressive growth, with a 23 percent increase for Avago Technologies (close to $400M) and a 141 percent increase for Qorvo (formerly TriQuint), to $350M.

Meanwhile, the inertial market keeps growing. This growth is beneficial to InvenSense, which continues its rise with a 32 percent increase in 2014, up to $329M revenue. Accelerometers, gyroscopes and magnetometers are not the only devices contributing to MEMS companies’ growth. Pressure sensors also made a nice contribution, especially in automotive and consumer sectors. Specifically, Freescale Semiconductor saw a 33 percent increase in pressure revenue, driven by the Tire Pressure Monitoring Systems (TPMS) business for automotive. On the down side, ink jet head companies still face hard times, with Hewlett-Packard (HP) and Canon both seeing revenues decrease. However, new markets are being targeted. Though thus far limited to consumer printers, MEMS technology is set to expand into the office and industrial markets as a substitute for laser printing technology (office) and inkjet piezo machining technology (for industrial & graphics).

“What we see is an industry that will generally evolve in four stages over the next 25 years. This is true for both CMOS image sensors and MEMS,” explains Dr. Eric Mounier, Senior Technology & Market Analyst, MEMS Devices & Technologies at Yole. “The ‘opening stage’ generally begins when the top three companies hold no more than 10 – 30 percent market share. Later on, the industry enters the ‘scale stage’ through consolidation, when the top three increase their collective market share to 45 percent.”

According to Yole, the “More than Moore” market research and strategy consulting company, the MEMS industry has now entered the “Expansion Stage.”

“Key players are expanding, and we’re starting to see some companies surpassing others (i.e. Bosch’s rise to the top). If we follow this model, the next step will be the “Balance & Alliance” stage, characterized by the top three holding up to 90 percent of market share”, comments Dr Mounier.

Among the 10 or so MEMS titans currently sharing most of the MEMS markets, Yole’s analysts have separated them into two categories:

  • “Titans with Momentum”: in this category Yole includes Bosch, InvenSense, Avago Technologies and Qorvo. Bosch’s case is particularly noteworthy, since it’s currently the only MEMS company with dual markets (automotive and consumer) and the right R&D/production infrastructure.
  • “Struggling Titans”: here Yole identifies STMicroelectronics, HP, Texas Instruments, Canon, Knowles, Denso and Panasonic. These companies are currently struggling to find an efficient growth engine.

Without question, both Bosch and InvenSense are growing, while others, like STMicroelectronics and Knowles, are suffering a slowdown or a decrease in MEMS sales.

Another interesting fact about Yole’s 2014 TOP MEMS Ranking is that there are no new entrants (and thus no exits).

More market figures and analysis on MEMS, the Internet of Things (IoT) and wearables can be found in Yole’s 2014 IoT report (Technologies & Sensors for Internet of Things: Business & Market Trends, June 2014), and the upcoming “Sensors for Wearables and Mobile” report.

Also, Yole is currently preparing the 2015 release of its “MEMS Industry Status.” This will be issued in April and will delve deeper into MEMS markets, strategies and players analyses.

GLOBALFOUNDRIES, a provider of advanced semiconductor manufacturing technology, and NXP Semiconductors N.V., a semiconductor company for secure connection solutions, today announced that they have jointly developed a next-generation embedded non-volatile memory (eNVM), which has resulted in production of 300mm prototype wafers on GLOBALFOUNDRIES’ 40-nanometer (nm) process technology platform. GLOBALFOUNDRIES is the first wafer foundry to develop and qualify 40nm eNVM low-power process technology. Volume production is expected in 2016 at its Singapore facility.

The successful execution of joint development and technology production milestones will enable faster time to market of high density on-chip eNVM for innovative applications in a variety of products including identification, near-field-communication, healthcare, and microcontrollers. NXP will leverage GLOBALFOUNDRIES’ leading-edge semiconductor manufacturing capability to apply the overall technology to 40nm eNVM that will bring competitive value to end customers.

“We are pleased to see the co-developed 40nm-LP eNVM technology is ready for production in GLOBALFOUNDRIES facility,” said Dr. Hai Wang, executive vice president of Technology and Operations at NXP Semiconductors. “GLOBALFOUNDRIES is the first foundry that developed this process technology specifically targeting markets that require embedded non-volatile memory products. The successful release to production will enable NXP to further strengthen our market leadership in offering advanced solutions for secure and near field communication market segments.”

“We have a long-standing and close collaboration with NXP across other technology nodes. The successful joint development of eNVM gives us a boost in our confidence in the marketplace as we advance our 40nm technology leadership,” said KC Ang, SVP and GM for GLOBALFOUNDRIES Singapore. “We look forward to having additional eNVM technology offerings for future market opportunities.”

GLOBALFOUNDRIES’ manufacturing site in Singapore is certified by the German Federal Office for Information Security (BSI) for secure IC products manufacturing. In 2012, the foundry received Common Criteria (ISO 15408) EAL6 certification, which was successfully renewed in 2014. The company is also a two-time winner of NXP’s annual supplier award for best foundry services.

North America-based manufacturers of semiconductor equipment posted $1.31 billion in orders worldwide in February 2015 (three-month average basis) and a book-to-bill ratio of 1.02, according to the February EMDS Book-to-Bill Report published today by SEMI.   A book-to-bill of 1.02 means that $102 worth of orders were received for every $100 of product billed for the month.

The three-month average of worldwide bookings in February 2015 was $1.31 billion. The bookings figure is 1.3 percent lower than the final January 2015 level of $1.33 billion, and is 1.0 percent higher than the February 2014 order level of $1.30 billion.

The three-month average of worldwide billings in February 2015 was $1.28 billion. The billings figure is 0.2 percent lower than the final January 2015 level of $1.28 billion, and is 0.9 percent lower than the February 2014 billings level of $1.29 billion.

“Year-to-date bookings and billings for North American semiconductor equipment are higher than last year for the same time period,” said SEMI president and CEO Denny McGuirk. “The year is off to a good start, with growth in bookings from the back-end sector.”

The SEMI book-to-bill is a ratio of three-month moving averages of worldwide bookings and billings for North American-based semiconductor equipment manufacturers. Billings and bookings figures are in millions of U.S. dollars.

Month                    Billings (3-mo. avg)   Bookings (3-mo. avg)   Book-to-Bill
September 2014           $1,256.5               $1,186.2               0.94
October 2014             $1,184.2               $1,102.3               0.93
November 2014            $1,189.4               $1,216.8               1.02
December 2014            $1,395.9               $1,381.5               0.99
January 2015 (final)     $1,279.1               $1,325.6               1.04
February 2015 (prelim)   $1,277.1               $1,308.1               1.02

Source: SEMI, March 2015
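Each book-to-bill value is simply the three-month average bookings divided by the three-month average billings. A quick check against the published figures (in $ millions):

```python
# (billings, bookings) in $M, three-month averages from the SEMI report.
data = {
    "Sep 2014": (1256.5, 1186.2),
    "Oct 2014": (1184.2, 1102.3),
    "Nov 2014": (1189.4, 1216.8),
    "Dec 2014": (1395.9, 1381.5),
    "Jan 2015": (1279.1, 1325.6),
    "Feb 2015": (1277.1, 1308.1),
}
for month, (billings, bookings) in data.items():
    # Book-to-bill: orders received per dollar of product billed.
    print(month, round(bookings / billings, 2))
```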

Smaller and more powerful medical systems are driving up sales of ICs, sensors, and other devices for the medical semiconductor market.  IC Insights believes medical semiconductor sales growth will strengthen this year and next before sliding back in the next expected economic slowdown in 2017 (Figure 1). Between 2013 and 2018, worldwide medical semiconductor sales are projected to rise by a compound annual growth rate (CAGR) of 12.3 percent, reaching $8.2 billion in the final year of the forecast.  In the 2008-2013 period (which included the 2009 downturn), medical semiconductor sales grew by a CAGR of 6.9 percent.

medical semiconductor sales

The IC portion of the medical semiconductor business is expected to rise by a CAGR of 10.7 percent to $6.6 billion in 2018, while the optoelectronics, sensors/actuators, and discretes (O-S-D) segment is forecast to grow at an annual rate of 20.3 percent to $1.6 billion that year (primarily due to strong demand for solid-state sensors and optical imaging devices).
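These forecast figures are internally consistent. Working backward from the 2018 values at the stated CAGRs gives the implied 2013 baselines (our own arithmetic, not figures published by IC Insights):

```python
def project(base, cagr, years):
    """Compound a base value forward at a given annual growth rate."""
    return base * (1 + cagr) ** years

# Implied 2013 baselines, in $B, from the 2018 forecasts and stated CAGRs.
ic_2013 = 6.6 / (1.107 ** 5)     # ICs: $6.6B in 2018 at 10.7% CAGR
osd_2013 = 1.6 / (1.203 ** 5)    # O-S-D: $1.6B in 2018 at 20.3% CAGR
total_2013 = 8.2 / (1.123 ** 5)  # total: $8.2B in 2018 at 12.3% CAGR

print(round(ic_2013, 2), round(osd_2013, 2), round(total_2013, 2))
# The IC and O-S-D baselines sum to roughly the total baseline.
```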

ICs and other semiconductor technologies continue to play key roles in reshaping and redefining medical systems. With more medical imaging systems being digitized and healthcare equipment running under computer control, IC-driven advancements are happening almost as quickly as they are in mobile phones and other consumer electronics, although government certification can slow some system introductions. The scaling of IC feature sizes, system-on-chip (SoC) designs, improvements in sensors, and powerful analog front-end (AFE) data converters are reducing the size of medical diagnostic equipment and the cost of using it.

Development of new medical systems for imaging and diagnostics, treatment, and surgery is heading in two different directions as equipment makers respond to growing pressure for lower costs and increased availability of healthcare worldwide. In one direction, new medical equipment is becoming smaller and less expensive so that systems can be used in hospital patients' rooms, clinics, and doctors' offices. These systems cost one-quarter to one-tenth the price of large diagnostic equipment, such as traditional MRI and CT scanners, which can cost $1 million and are normally installed in medical-imaging centers or in dedicated hospital examination rooms.

Also, lower-cost wearable medical systems and fitness monitors, which can wirelessly transmit vital signs and other readings to doctors or be used as "activity trackers" by health-conscious individuals, are seeing tremendous growth. In some cases, medical and fitness-monitoring applications can be performed directly by smartphones using their embedded sensors and downloaded software apps. However, most countries require medically certified mobile healthcare devices for monitoring patients and the elderly in their homes, with the information sent to doctors via wireless connections to cellphones or the Internet.

The second major trend in medical equipment is the development of more powerful and integrated systems, which are expensive but promise to lower healthcare costs by detecting cancer and diseases sooner and supporting less invasive surgery for quick recovery times and shorter stays in hospitals. Computer-assisted surgery systems, surgical robots, and operating-room automation are among new technologies being pursued by some hospitals in developed markets.

High growth in lower-cost systems, along with the rising price tags of more sophisticated hospital equipment in developed markets, is expected to increase total medical electronics systems sales at a CAGR of 8.2 percent between 2013 and 2018, to $70.1 billion in the final year of the forecast.

Additional details on the IC market for medical and wearable electronics are included in the 2015 edition of IC Insights' IC Market Drivers—A Study of Emerging and Major End-Use Applications Fueling Demand for Integrated Circuits.