Tag Archives: letter-leds-top

April 2015 marks the 50th anniversary of one of the business world’s most profound drivers, now commonly referred to as Moore’s Law.  In April 1965, Gordon Moore, later co-founder of Intel, observed that the number of transistors per square inch on integrated circuits would continue to double every year.  This “observation” has set the exponential tempo for five decades of innovation and investment, resulting in today’s $336 billion USD integrated circuits industry enabled by the $82 billion USD semiconductor equipment and materials industry (SEMI and SIA 2014 annual totals).

SEMI, the global industry association serving the nano- and micro-electronic manufacturing supply chains, today recognizes the enabling contributions made by the over 1,900 SEMI Member companies in developing semiconductor equipment and materials that produce over 219 billion integrated circuit devices and 766 billion semiconductor units per year (WSTS, 2014).

50 years of Moore’s Law has led to one of the most technically sophisticated, constantly evolving manufacturing industries operating today.  Every day, integrated circuit (IC) production now does what was unthinkable 50 years ago.  SEMI Member companies now routinely produce materials such as process gases, for example, to levels of 99.994 percent quality for bulk silane (SiH4) in compliance with the SEMI C3.55 Standard.  Semiconductor equipment manufacturers develop the hundreds of processing machines necessary for each IC factory (fab) that are at work all day, every day, processing more than 100 silicon wafers per hour with fully automated delivery and control – all with standardized interoperability. SEMI Member companies provide the equipment to inspect wafer process results automatically, and find and identify defects at sizes only fractions of the 14nm circuit line elements in today’s chips, ensuring process integrity throughout the manufacturing process.

“It was SEMI Member companies who enabled Moore’s Law’s incredible exponential growth over the last 50 years,” said Denny McGuirk, president and CEO of SEMI.  “Whereas hundreds of transistors on an IC was noteworthy in the 1960s, today over 1.3 billion transistors are on a single IC.  SEMI Member companies provide the capital equipment and materials for today’s mega-fabs, with each one processing hundreds or thousands of ICs on each wafer with more than 100,000 wafers processed per month.”

To celebrate SEMI Member companies’ contribution to the 50 years of Moore’s Law, SEMI has produced a series of Infographics that show the progression of the industry.

1971 vs. 2015:
Price per chip: $351 vs. $393
Price per 1,000 transistors: $150 vs. $0.0003
Number of transistors per chip: 2,300 vs. 1,300,000,000
Minimum feature size on chip: 10,000nm vs. 14nm

From SEMI infographic “Why Moore Matters”: www.semi.org/node/55026

Common pulsed measurement challenges are defined.

In case you missed it, Part 1 is available here.

BY DAVID WYBAN, Keithley Instruments, a Tektronix Company, Solon, Ohio

For SMU and PMU users, an issue that sometimes arises when making transient pulse measurements is the presence of “humps” (FIGURE 1) in the captured current waveform at the rising and falling edges of the voltage pulse. These humps are caused by capacitances in the system originating from the cabling, the test fixture, the instrument, and even the device itself. When the voltage being output is changed, the stray capacitances in the system must be either charged or discharged and the charge current for this either flows out of or back into the instrument. SMUs and PMUs measure current at the instrument, not at the DUT, so the instrument measures these current flows while a scope probe at the device does not.

FIGURE 1. Humps in the captured current (red) waveform at the rising and falling edges of the voltage pulse.

This phenomenon is seen most often when the change in voltage is large or happens rapidly and the current through the device itself is low. The higher the voltage of the pulse or the faster the rising and falling edges, the larger the current humps will be. For SMUs with rise times in the tens of microseconds, these humps are usually only seen when the voltages are hundreds or even thousands of volts and the current through the device is only tens of microamps or less. However, for PMUs where the rise times are often less than 1μs, these humps can become noticeable on pulses of only a couple of volts, even when the current through the device is as high as several milliamps.
Although these humps in the current waveform may seem like a big problem, they are easy to eliminate. The humps are the result of the current being measured at the high side of the device, where the voltage is changing. Adding a second SMU or PMU at the low side of the device to measure current will make these humps go away: at the low side, the voltage does not change, so no charge or discharge currents flow and the current measured at the instrument matches the current at the device. If this isn’t an option, the problem can be minimized by reducing the stray capacitance in the system, most simply by shortening the cables. Shorter cables mean less stray capacitance, which reduces the size of the humps in the current waveform.
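As a sanity check, the size of these humps can be estimated from i = C × dV/dt. The Python sketch below assumes a cable capacitance of roughly 100 pF per meter, a typical coaxial value rather than a figure from this article:

```python
# Sketch: estimate the charging-current "hump" caused by stray capacitance,
# i = C * dV/dt. The ~100 pF/m cable capacitance is an assumed, typical value.

def hump_current(stray_capacitance_f, voltage_step_v, rise_time_s):
    """Approximate peak charging current during a voltage edge."""
    return stray_capacitance_f * voltage_step_v / rise_time_s

cable_length_m = 2.0
c_stray = cable_length_m * 100e-12          # ~100 pF per meter (assumption)

# Slow SMU-style edge: 500 V step with a 20 us rise time
print(hump_current(c_stray, 500.0, 20e-6))  # ~5 mA hump
# Fast PMU-style edge: 2 V step with a 100 ns rise time
print(hump_current(c_stray, 2.0, 100e-9))   # ~4 mA hump
```

Even a 2 V pulse produces a noticeable hump when the edge is fast, which is why PMUs show the effect at much lower voltages than SMUs.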

The next common pulse measurement issue is test lead resistance. As test currents get higher, the impact of this resistance becomes increasingly significant. FIGURE 2 shows an SMU that is performing a pulse I-V measurement at 2V across a 50mΩ load. Based on Ohm’s Law, one might expect to measure a current through the device of 40A, but when the test is actually performed, the level of current measured is only 20A. That “missing” 20A is the result of test lead resistance. In fact, we were not pulsing 2V into 50mΩ but into 100mΩ instead, with 25mΩ per test lead. With 50mΩ of lead resistance, half of the output voltage sourced was dropped in the test leads and only half of it ever reached the device.
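A minimal sketch of the Figure 2 arithmetic, assuming the 50 mΩ of lead resistance is split equally between the HI and LO leads as described above:

```python
# Sketch of the Figure 2 arithmetic: lead resistance halves the measured
# current. Equal 25 mohm resistance in each lead is assumed, per the text.

v_source = 2.0      # programmed pulse voltage, V
r_load = 0.050      # 50 mohm device under test
r_lead = 0.025      # 25 mohm per test lead

i_expected = v_source / r_load                   # 40 A from Ohm's Law
i_measured = v_source / (r_load + 2 * r_lead)    # 20 A actually flows
v_at_device = i_measured * r_load                # only 1 V reaches the DUT

print(i_expected, i_measured, v_at_device)
```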

FIGURE 2. Impact of test lead resistance.

To characterize the device correctly, it’s essential to know not only the current through the device but the actual voltage at the device. On SMUs this is done by using remote voltage sensing. Using a second set of test leads allows the instrument to sense the voltage directly at the device; because almost no current flows through these leads, the voltage fed back to the instrument will match the voltage at the device. Also, because these leads feed the voltage at the device directly back into the SMU’s feedback loop, the SMU can compensate for the voltage drop across the test leads by outputting a higher voltage at its output terminals.

Although SMUs can use remote sensing to compensate for voltage drops in the test leads, there is a limit to how much drop it can compensate for. For most SMUs, this maximum drop is about 3V/lead. If the voltage drop per lead reaches or exceeds this limit, strange things can start happening. The first thing is that the rise and fall times of the voltage pulse slow down, significantly increasing the time required to make a settled measurement. Given enough time for the pulse to settle, the voltage measurements may come back as the expected value, but the measured current will be lower than expected because the SMU is actually sourcing a lower voltage at the DUT than the level that it is programmed to source.

If you exceed the source-sense lead drop while sourcing current, a slightly different set of strange behaviors may occur. The current measurement will come back as the expected value and will be correct because current is measured internally and this measurement is not affected by lead drop, but the voltage reading will be higher than expected. In transient pulse measurements, you may even see the point at which the source-sense lead drop limit was exceeded as the measured voltage suddenly starts increasing again after it appeared to be settling.

These strange behaviors can be difficult to detect in the measured data if you do not know what voltage to expect from your device. Therefore, inspecting your pulse waveforms fully when validating your test system is essential.

Minimizing test lead resistance is essential to ensuring quality pulse measurements. There are two ways to do this:

Minimize the length of the test leads. Wire resistance increases in direct proportion to the length of the wire: doubling the wire’s length doubles the resistance. Keeping lead lengths no greater than 3 meters is highly recommended for high current pulse applications.

Use wire of the appropriate diameter or gauge for the current being delivered. The resistance of a wire is inversely proportional to the cross-sectional area of the wire. Increasing the diameter, or reducing the gauge, of the wire increases this area and reduces the resistance. For pulse applications up to 50A, a wire gauge of no greater than 12 AWG is recommended; for applications up to 100A, it’s best to use no greater than 10 gauge (see the sketch below).
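As a rough sizing aid, the sketch below estimates the voltage dropped in each lead for a given gauge, length and pulse current. The ohms-per-meter values are approximate handbook figures for copper at room temperature, not values from this article:

```python
# Rough lead-drop estimate by wire gauge. Ohm-per-meter values are approximate
# handbook numbers for copper at room temperature (assumptions, not from the
# article).

OHMS_PER_METER = {10: 0.00328, 12: 0.00521, 14: 0.00829}   # AWG -> ohm/m

def drop_per_lead(awg, length_m, current_a):
    """Voltage dropped across one test lead of the given gauge and length."""
    return OHMS_PER_METER[awg] * length_m * current_a

# 3 m leads carrying a 50 A pulse
for awg in (10, 12, 14):
    print(f"{awg} AWG: {drop_per_lead(awg, 3.0, 50.0):.2f} V per lead")
# 10 AWG drops ~0.5 V per lead; 14 AWG drops ~1.2 V per lead, eating into the
# ~3 V/lead remote-sense compensation budget mentioned earlier.
```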

Excessive test lead inductance is another common issue. In DC measurements, test lead inductance is rarely considered because it has little effect on the measurements. However, in pulse measurements, lead inductance has a huge effect and can play havoc with a system’s ability to take quality measurements.

FIGURE 3. Humps in the voltage waveform of transient pulse measurements due to test system inductance.

Humps in the voltage waveform of transient pulse measurements (FIGURE 3) are a common problem when generating current pulses. Just as with humps in the current waveforms, these humps can be seen in the data from the instrument but are nowhere to be seen when measured at the device with an oscilloscope. These humps are the result of the additional voltage seen at the instrument due to inductance in the cabling between the instrument and the device.

Equation 1: V = L × (di/dt)

Equation 1 describes the relation between inductance and voltage. It shows that for a given change in current over change in time (di/dt), the larger the inductance L, the larger the resulting voltage will be. It also tells us that for a fixed inductance L, the larger the change in current or the smaller the change in time, the larger the resulting voltage will be. This means that the larger the pulse and/or the faster the rise and fall times, the bigger the voltage humps will be.
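A worked example of Equation 1, assuming an untwisted lead inductance of roughly 1 μH per meter (a common rule of thumb, not a figure from the article):

```python
# Worked example of Equation 1, V = L * di/dt. The ~1 uH/m lead inductance is
# a rule-of-thumb assumption, not a figure from the article.

def inductive_hump(inductance_h, delta_i_a, delta_t_s):
    return inductance_h * delta_i_a / delta_t_s

l_leads = 2.0 * 1e-6                          # 2 m of leads at ~1 uH/m
print(inductive_hump(l_leads, 10.0, 100e-6))  # 10 A edge over 100 us -> ~0.2 V
print(inductive_hump(l_leads, 10.0, 1e-6))    # same edge over 1 us  -> ~20 V
```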

To remedy this problem, instruments like SMUs offer remote voltage sensing, allowing them to measure around this lead inductance and measure the voltage directly at the device. However, as with excessive lead resistance, excessive lead inductance can also cause a problem for SMUs. If the inductance is large enough and causes the source-sense lead drop to exceed the SMU’s limit, transient pulse measurement data will have voltage measurement errors on the rising and falling edges similar to the ones seen when lead resistance is too large. Pulse I-V measurements are generally unaffected by lead inductance because the measurements are taken during the flat portion of the pulse where the current is not changing. However, excessive lead inductance will slow the rising and falling edges of voltage pulses and may cause ringing on current pulses, thereby requiring larger pulse widths to make a good settled pulse I-V measurement.

The Anatomy of a Pulse
The amplitude and base describe the height of the pulse in the pulse waveform. Base describes the DC offset of the waveform from 0; this is the level the waveform sits at both before and after the pulse. Amplitude is the level of the waveform relative to the base level, so the absolute level of the pulse top equals the base plus the amplitude. For example, a pulse waveform with a base of 1V and an amplitude of 2V would have a low level of 1V and a high level of 3V.
Pulse width is the time that the pulse signal is applied. It is commonly defined as the width in time of the pulse at half maximum also known as Full Width at Half Maximum (FWHM). This industry standard definition means the pulse width is measured where the pulse height is 50% of the amplitude.
Pulse period is the length in time of the entire pulse waveform before it is repeated and can easily be measured by measuring the time from the start of one pulse to the next.
The ratio of pulse width over pulse period is the duty cycle of the pulse waveform.
A pulse’s rise time and fall time are the times it takes for the waveform to transition from the low level to the high level and from the high level back down to the low level. The industry standard way to measure the rise time is to measure the time it takes the pulse waveform to go from 10% amplitude to 90% amplitude on the rising edge. Fall time is defined as the time it takes for the waveform to go from 90% amplitude to 10% amplitude on the falling edge.
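These sidebar definitions translate directly into code. The sketch below extracts the FWHM pulse width and the 10-90 percent rise time from a sampled waveform; the synthetic trapezoidal pulse and the simple threshold search are illustrative assumptions only:

```python
# Sketch: apply the sidebar definitions to a sampled pulse waveform.
# The synthetic trapezoidal pulse and the simple threshold search are
# illustrative assumptions; real data would need noise handling.
import numpy as np

def pulse_parameters(t, v, base, amplitude):
    half = base + 0.50 * amplitude
    lo10 = base + 0.10 * amplitude
    lo90 = base + 0.90 * amplitude

    above = np.where(v >= half)[0]            # samples above 50% of amplitude
    width_fwhm = t[above[-1]] - t[above[0]]   # pulse width at half maximum

    i10 = np.argmax(v >= lo10)                # first sample at 10% on the rise
    i90 = np.argmax(v >= lo90)                # first sample at 90% on the rise
    rise_time = t[i90] - t[i10]
    return width_fwhm, rise_time

# Trapezoidal test pulse: base 1 V, amplitude 2 V (low level 1 V, high level 3 V)
t = np.linspace(0.0, 100e-6, 1001)
v = np.interp(t, [0, 10e-6, 12e-6, 60e-6, 62e-6, 100e-6], [1, 1, 3, 3, 1, 1])
print(pulse_parameters(t, v, base=1.0, amplitude=2.0))   # ~50 us, ~1.6 us
```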

Although SMUs are able to compensate for some lead inductance, PMUs have no compensation features, so the effects of inductance must be dealt with directly, such as by:

  • Reducing the size of the change in current by reducing the magnitude of the pulse.
  • Increasing the length of the transition times by increasing the rise and fall times.
  • Reducing the inductance in the test leads.

Depending on the application or even the instrument, the first two measures are usually infeasible, which leaves reducing the inductance in the test leads. The amount of inductance in a set of test leads is proportional to the loop area between the HI and LO leads. So, to reduce the inductance in the leads and therefore the size of the humps, we must reduce the loop area, which is easily done by twisting the leads together to create a twisted pair or by using coaxial cable. Loop area can be reduced further by simply reducing the length of the cable.
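For a rough feel of how loop geometry sets lead inductance, the textbook parallel-wire approximation L′ = (μ0/π)·acosh(D/2a) per unit length can be used; the formula and the dimensions below are assumptions, not values from the article:

```python
# Rough estimate of test-lead inductance from loop geometry, using the
# parallel-wire formula L' = (mu0/pi) * acosh(D / (2a)) per unit length,
# where D is the conductor spacing and a the wire radius. The formula and
# dimensions are textbook assumptions, not values from the article.
import math

MU0 = 4.0e-7 * math.pi   # permeability of free space, H/m

def pair_inductance(length_m, spacing_m, wire_radius_m):
    return (MU0 / math.pi) * math.acosh(spacing_m / (2 * wire_radius_m)) * length_m

# 2 m of heavy-gauge leads (radius ~1 mm): loosely routed 20 mm apart
# vs. twisted so the conductors sit ~3 mm apart
print(pair_inductance(2.0, 20e-3, 1e-3))   # ~2.4 uH
print(pair_inductance(2.0, 3e-3, 1e-3))    # ~0.8 uH
```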

The Semiconductor Industry Association (SIA), representing U.S. leadership in semiconductor manufacturing and design, today announced worldwide sales of semiconductors reached $27.8 billion for the month of February 2015, an increase of 6.7 percent from February 2014 when sales were $26.0 billion. Global sales from February 2015 were 2.7 percent lower than the January 2015 total of $28.5 billion, reflecting seasonal trends. Regionally, sales in the Americas increased by 17.1 percent compared to last February to lead all regional markets. All monthly sales numbers are compiled by the World Semiconductor Trade Statistics (WSTS) organization and represent a three-month moving average.

“The global semiconductor industry maintained momentum in February, posting its 22nd straight month of year-to-year growth despite macroeconomic headwinds,” said John Neuffer, president and CEO, Semiconductor Industry Association. “Sales of DRAM and Analog products were particularly strong, notching double-digit growth over last February, and the Americas market achieved its largest year-to-year sales increase in 12 months.”

Regionally, year-to-year sales increased in the Americas (17.1 percent) and Asia Pacific (7.6 percent), but decreased in Europe (-2.0 percent) and Japan (-8.8 percent). Sales decreased compared to the previous month in Europe (-1.6 percent), Asia Pacific (-2.2 percent), Japan (-2.3 percent), and the Americas (-4.4 percent).

“While we are encouraged by the semiconductor market’s sustained growth over the last two years, a key driver of our industry’s continued success is free trade,” Neuffer continued. “A legislative initiative called Trade Promotion Authority (TPA) has paved the way for opening markets to American goods and services for decades, helping to give life to nearly every U.S. free trade agreement in existence, but it expired in 2007. With several important free trade agreements currently under negotiation, Congress should swiftly re-enact TPA.”

February 2015 (US$, billions)
Month-to-Month Sales
Market Last Month Current Month % Change
Americas 6.51 6.23 -4.4%
Europe 2.95 2.90 -1.6%
Japan 2.62 2.56 -2.3%
Asia Pacific 16.47 16.10 -2.2%
Total 28.55 27.79 -2.7%
Year-to-Year Sales
Market Last Year Current Month % Change
Americas 5.32 6.23 17.1%
Europe 2.96 2.90 -2.0%
Japan 2.81 2.56 -8.8%
Asia Pacific 14.96 16.10 7.6%
Total 26.04 27.79 6.7%
Three-Month-Moving Average Sales
Market Sep/Oct/Nov Dec/Jan/Feb % Change
Americas 6.53 6.23 -4.6%
Europe 3.19 2.90 -9.2%
Japan 2.93 2.56 -12.7%
Asia Pacific 17.12 16.10 -6.0%
Total 29.77 27.79 -6.7%

“The LED market is a complex but promising market,” commented Pars Mukish, Business Unit Manager, LED, OLED and Sapphire at Yole Développement (Yole). In 2015, companies are not relying on more technical breakthroughs, except at the LED module level, where integration remains an important issue.

“However, there is still overcapacity,” said Mukish. “This is causing many changes in the supply chain, first at the chip level, then at the module/system level. The spin-off of its LED business that Royal Philips announced in July 2014, a business which grew from its acquisition of Lumileds in 2005, is one example.”

The LED industry’s complexity results from numerous technical issues, its many players and a multitude of lighting applications. Its promise comes from especially large-volume lighting opportunities, Yole stresses in its latest reports. Yole, the ‘More than Moore’ market research and strategy consulting company, foresees a global business reaching almost $516 million at the system level by 2016. (Source: LED in road and street lighting report, Jul. 2013, Yole Développement & Luxfit)

Today, LED technology’s average penetration rate is 10-20 percent, depending on geographic area. Each country has its own policy and has set up different measures to help LED implementation. In Japan, for example, penetration has reached 30 percent, thanks to government involvement.

Governmental measures are clearly welcome, as the technology is still considered expensive by the public. “Even though we saw a real breakthrough for LED technology from 2006 to 2014, upfront LED costs are still high compared to existing technologies,” explains Mukish. “Today, the real growth is in external lighting applications where LED technology is partially implemented. Commercial and industrial lighting players are also considering LED technology but today implementation is still developing.”

In 2015, the technical issues differ from those of previous years; they are mainly located at the LED module level. LED market leaders are therefore developing answers to packaging and integration needs. In the report entitled LED Packaging Technology and Market trends (Sept. 2014, Yole Développement), Yole details the positive impact of advanced packaging technologies on LED manufacturing, especially LED packaging materials.

Mukish adds: “In 2015, we clearly see the value moving later in the supply chain. It was initially at the LED chip level, but we have identified strong investments at the module and system level to develop smart solutions in terms of packaging technologies and functionalities.” In this context, Yole is focusing its 2015 activities on analyzing new technologies at the LED module level. The company is investigating the impact on the supply chain and determining key players’ strategies (LED module, related technologies and equipment report: available mid-2015).

SEMI today announced an update of its World Fab Forecast report, revising the outlooks for 2015 and 2016. The report reveals that fab equipment spending increased almost 20 percent in 2014, will rise 15 percent in 2015, and will increase only 2-4 percent in 2016. Since November 2014, SEMI has made 270 updates to the World Fab Forecast, which tracks fab spending for construction and equipment, as well as capacity changes, technology node transitions and product type changes by fab.

Year 2013 2014 2015 2016
Fab equipment* $29.4 $35.2 $40.5 $41 to $42
Change % fab equipment -10.0% 19.8% 15.0% 2% to 4%
Fab construction $8.8 $7.7 $5.2 $6.9
Change % construction 13.6% -11.0% -32.0% +32.0%

* Chart US$, in billions; Source: SEMI, March 2015

The SEMI World Fab Forecast and its related Fab Database reports track any equipment needed to ramp fabs, upgrade technology nodes, and expand or change wafer size, including new equipment, used equipment, or in-house equipment and spending on facilities for installation.

Fab spending on construction and equipment is a fraction of a company’s total capital expenditure (capex). Typically, if capex trends upward, fab spending follows.  Capex for most of the large semiconductor companies is expected to increase by 8 percent in 2015 and grow another 3 percent in 2016. These increases are driven by new fab construction projects and by the ramp of new technology nodes. Spending on construction projects, which typically represents new cleanroom projects, will decline significantly, by 32 percent, in 2015, but is expected to rebound by 32 percent in 2016.

Comparing regions across the world, according to SEMI, the highest fab equipment spending in 2015 will occur in Taiwan, with US$ 11.9 billion, followed by Korea with US$ 9 billion.  The region with the third-largest spending, the Americas, is forecast to spend about US$ 7 billion, although its spending will decline by 12 percent in 2015 and by another 12 percent in 2016.  Fourth in spending is China, with US$ 4.7 billion in 2015 and US$ 4.2 billion in 2016. In other regions, Japan’s spending will grow by about 6 percent in 2015, to US$ 4 billion, and 2 percent in 2016, to US$ 4.2 billion.  The Europe/Mideast region will see growth of about 20 percent (US$ 2.7 billion) in 2015 and over 30 percent (US$ 3.5 billion) in 2016. South East Asia is expected to grow by about 15 percent (US$ 1.3 billion) in 2015 and 70 percent (US$ 2.2 billion) in 2016.

2015 is expected to be the second consecutive year of equipment spending growth. SEMI’s positive outlook for the year is based on spending trends tracked as part of its fab investment research. The bottom-up, company-by-company and fab-by-fab approach points to strong investments by foundries and memory companies driving this year’s growth.

The SEMI World Fab Forecast Report lists over 40 facilities making DRAM products. Many facilities have major spending for equipment and construction planned for 2015.

Organic light emitting diodes (OLEDs), which are made from carbon-containing materials, have the potential to revolutionize future display technologies, making low-power displays so thin they’ll wrap or fold around other structures, for instance.

Conventional LCD displays must be backlit by either fluorescent light bulbs or conventional LEDs, whereas OLEDs require no backlighting. An even greater technological breakthrough would be OLED-based laser diodes. Researchers have long dreamed of building organic lasers, but they have been hindered by the organic materials’ tendency to operate inefficiently at the high currents required for lasing.

Now a new study from a team of researchers in California and Japan shows that OLEDs made with finely patterned structures can produce bright, low-power light sources, a key step toward making organic lasers. The results are reported in a paper appearing this week on the cover of the journal Applied Physics Letters, from AIP Publishing.

The key finding, the researchers say, is to confine charge transport and recombination to nanoscale areas, which extends the electroluminescence efficiency roll-off (the current density at which the efficiency of the OLEDs dramatically decreases) by almost two orders of magnitude. The new device structures do this by suppressing heating and preventing charge recombination.

“An important effect of suppressing roll-off is an increase in the efficiency of devices at high brightness,” said Chihaya Adachi of Kyushu University, who is a co-author of the paper. “This results in lower power to obtain the same brightness.”

“For years scientists working in organic semiconductors have dreamed of making electrically-driven organic lasers,” said Thuc-Quyen Nguyen of the University of California, Santa Barbara, another co-author. “Lasers operate in extreme conditions with electric currents that are significantly higher than those used in common displays and lighting. At these high currents, energy loss processes become stronger and make lasing difficult.

“We see this work, which reduces some loss processes, as one step on the road toward realizing organic lasers,” Nguyen added.

How OLEDs Work

OLEDs operate through the interaction of electrons and holes. “As a simple visualization,” Adachi said, “one can think of an organic semiconductor as a subway train with someone sitting in every seat. The seats represent molecules and the people represent energetic particles, i.e., electrons. When people board the train from one end, they have extra energy and want to go to the relaxed state of sitting. As people board, some of the seated people rise and exit the train at the other end leaving empty seats, or ‘holes,’ for the standing people to fill. When a standing person sits, the person goes to a relaxed state and releases energy. In the case of OLEDs, the person releases the energy as light.”

Production of OLED-based lasers requires current densities of thousands of amperes per square centimeter (kA/cm2), but until now, current densities have been limited by heating. “At high current densities, brightness is limited by annihilation processes,” Adachi said. “Think of large numbers of people on the train colliding into each other and losing energy in ways other than by sitting and releasing light.”

In previous work, Adachi and colleagues showed OLED performance at current densities over 1 kA/cm2 but without the necessary efficiency required for lasers and bright lighting. In their current paper, they show that the efficiency problem can be solved by using electron-beam lithography to produce finely-patterned OLED structures. The small device area supports charge density injection of 2.8 kA/cm2 while maintaining 100 times higher luminescent efficiency than previously observed. “In our device structure, we have effectively confined the entrance and exit to the middle of the train. People diffuse to the two less crowded ends of the train and reduce collisions and annihilation.”

Pulsed measurements are defined in Part 1, and common pulsed measurement challenges are discussed in Part 2.

By DAVID WYBAN, Keithley Instruments, a Tektronix Company, Solon, Ohio

Performing a DC measurement starts with applying the test signal (typically a DC voltage), then waiting long enough for all the transients in the DUT and the test system to settle out. The measurements themselves are typically performed using a sigma-delta or integrating-type analog-to-digital converter (ADC). The conversion takes place over one or more power line cycles to eliminate noise in the measurements due to ambient power line noise in the test environment. Multiple measurements are often averaged to increase accuracy. It can take 100ms or longer to acquire a single reading using DC measurement techniques.

In contrast, pulsed measurements are fast. The test signal is applied only briefly before the signal is returned to some base level. To fit measurements into these short windows, sigma-delta ADCs are run at sub-power-line interval integration times; sometimes, the even faster successive approximation register (SAR) type ADCs are used. Because of these high speeds, readings from pulsed measurements are noisier than readings returned by DC measurements. However, in on-wafer semiconductor testing, pulse testing techniques are essential to prevent device damage or destruction. Wafers have no heat sinking to pull away heat generated by current flow; if DC currents were used, the heat would increase rapidly until the device was destroyed. Pulse testing allows applying test signals for very short periods, avoiding this heat buildup and damage.

Why use pulsed measurements?

The most common reason for using pulsed measurements is to reduce joule heating (i.e., device self-heating). When a test signal is applied to a DUT, the device consumes power and turns it into heat, increasing the device’s temperature. The longer that power is applied, the hotter the device becomes, which affects its electrical characteristics. If a DUT’s temperature can’t be kept constant, it can’t be characterized accurately. However, with pulsed testing, power is only applied to the DUT briefly, minimizing self-heating. Duty cycles of 1 percent or less are recommended to reduce the average power dissipated by the device over time. Pulsed measurements are designed to minimize the power applied to the device so much that its internal temperature rise is nearly zero, so heating will have little or no effect on the measurements.
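As a simple illustration of how duty cycle limits the average power dissipated in the DUT (the pulse values are assumed for illustration, not taken from the article):

```python
# Illustration: average power delivered to the DUT scales with duty cycle.
# Pulse values below are illustrative assumptions.

def average_power(v_pulse, i_pulse, pulse_width_s, period_s):
    duty_cycle = pulse_width_s / period_s
    return v_pulse * i_pulse * duty_cycle

# 20 V, 5 A pulses (100 W peak) applied for 100 us every 10 ms = 1% duty cycle
print(average_power(20.0, 5.0, 100e-6, 10e-3))   # -> 1.0 W average
```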

Because they minimize joule heating, pulsed measurements are widely used in nanotechnology research, such as when characterizing delicate materials and structures like CNT FETs, semiconductor nanowires, graphene-based devices, molecular-based electronics and MEMS structures. The heat produced with traditional DC measurement techniques could easily alter or destroy them.

To survive high levels of continuous DC power, devices like MOSFETs and IGBTs require packaging with a solid metal backing and even heat-sinking. However, during the early stages of device development, packaging these experimental devices would be much too costly and time consuming, so early testing is performed at the wafer level. Because pulsed testing minimizes the power applied to a device, it allows for complete characterization of these devices on the probe station, reducing the cost of test.

The reduction in joule heating that pulsed testing allows also simplifies the process of characterizing devices at varying temperatures. Semiconductor devices are typically so small that it is impossible to measure their temperature directly with a probe. With pulsed measurements, however, the self-heating of the device can be made so insignificant that its internal temperature can be assumed to be equal to the surrounding ambient temperature. To characterize the device at a specific temperature, simply change the surrounding ambient temperature with a thermal chamber or temperature-controlled heat sink. Once the device has reached thermal equilibrium at the new ambient temperature, repeat the pulsed measurements to characterize the device at the new temperature.

Pulsed measurements are also useful for extending instruments’ operating boundaries. A growing number of power semiconductor devices are capable of operating at 100A or higher, but building an instrument capable of sourcing this much DC current would be prohibitive. In pulse mode, however, these high power outputs are needed only for very short intervals, which can be achieved by storing the required energy from a smaller power supply in capacitors and delivering it all in one short burst. This allows instruments like the Model 2651A High Power SourceMeter SMU instrument to combine sourcing up to 50A with precision current and voltage measurements.

Pulsed I-V vs. transient measurements

Pulsed measurements come in two forms: pulsed I-V and transient. Pulsed I-V (FIGURE 1) is a technique for gathering DC-like current vs. voltage curves using pulses rather than DC signals. In the pulsed I-V technique, the current and voltage are measured near the end of the flat top of the pulse, before the falling edge. In this technique, the shape of the pulse is extremely important because it determines the quality of the measurement. If the top of the pulse has not settled before this measurement is taken, the resulting reading will be noisy and/or incorrect. Sigma-delta or integrating ADCs should be configured to perform their conversion over as much of this flat top as possible to maximize accuracy and reduce measurement noise.

FIGURE 1. Pulse I-V technique.

Two techniques can improve the accuracy of pulsed I-V measurements. If the width of the pulse and measurement speed permit, multiple measurements made during the flat portion of the pulse can be averaged together to create a “spot mean” measurement. This technique is commonly employed with instruments that use high speed successive approximation register (SAR) ADCs, which perform conversions quickly, often at rates of 1μs per sample or faster, sacrificing resolution for speed. At these high speeds, many samples can be made during the flat portion of the pulse. Averaging as many samples as possible enhances the resolution of the measurements and reduces noise. Many instruments have averaging filters that can be used to produce a single reading. If even greater accuracy is required, the measurement can be repeated over several pulses and the readings averaged to get a single reading. To obtain valid results using this method, the individual pulsed measurements should be made in quick succession to avoid variations in the readings due to changes in temperature or humidity.
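A minimal sketch of the spot mean approach described above; the 50-90 percent measurement window and the array layout are assumptions rather than specifics from the article:

```python
# Sketch of a "spot mean" pulse I-V reading: average only samples from the
# settled, flat portion of each pulse. The 50%-90% window of the pulse width
# is an assumed choice, not a value from the article.
import numpy as np

def spot_mean(samples, pulse_start, pulse_end, window=(0.5, 0.9)):
    """Mean of the samples between 50% and 90% of the pulse width."""
    n = pulse_end - pulse_start
    lo = pulse_start + int(window[0] * n)
    hi = pulse_start + int(window[1] * n)
    return np.asarray(samples[lo:hi]).mean()

def repeated_spot_mean(traces, pulse_start, pulse_end):
    """Average the spot means of several repeated pulses for lower noise."""
    return float(np.mean([spot_mean(tr, pulse_start, pulse_end) for tr in traces]))

# Example: three repeated captures of the same pulse (flat top at indices 100-600)
traces = [np.full(1000, 1.0) + 0.01 * np.random.randn(1000) for _ in range(3)]
print(repeated_spot_mean(traces, 100, 600))
```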

Transient pulsed measurements (FIGURE 2) are performed by sampling the signal at high speed to create a signal vs. time waveform. An oscilloscope is often used for these measurements but they can also be made with traditional DC instruments by running the ADCs at high speed. Some DC instruments even include high-speed SAR type ADCs for performing transient pulsed measurements. Transient measurements are useful for investigating device behaviors like self-heating and charge trapping.

FIGURE 2. Transient pulse measurements.

Instrumentation options

The simplest pulse measurement instrumentation option is a pulse generator to source the pulse combined with an oscilloscope to measure the pulse (FIGURE 3). Voltage measurements can be made by connecting a probe from the scope directly to the DUT; current measurements can be made by connecting a current probe around one of the DUT test leads. If a current probe is unavailable, a precision shunt resistor can be placed in series with the device and the voltage across the shunt measured with a standard probe, then converted to current using a math function in the scope. This simple setup offers a variety of advantages. Pulse generators provide full control over pulse width, pulse period, rise time and fall time. They are capable of pulse widths as narrow as 10 nanoseconds and rise and fall times as short as 2-3 nanoseconds. Oscilloscopes are ideal for transient pulse measurements because of their ability to sample the signal at very high speeds.
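A small sketch of the shunt-resistor method described above, mirroring the scope math function I = V_shunt / R_shunt; the 100 mΩ shunt value is assumed:

```python
# Sketch of the shunt-resistor current measurement described above: the scope
# digitizes the voltage across a precision shunt in series with the DUT, and a
# math trace converts it to current. The 100 mohm shunt value is assumed.
import numpy as np

R_SHUNT = 0.100   # precision shunt resistance, ohm (assumed value)

def shunt_current(v_shunt_trace):
    """Scope math-trace equivalent: I = V_shunt / R_shunt."""
    return np.asarray(v_shunt_trace) / R_SHUNT

# Example: a captured shunt-voltage trace of 0.5 V corresponds to 5 A
print(shunt_current([0.0, 0.5, 0.5, 0.0]))
```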

FIGURE 3. Pulse measurement using a pulse generator and an oscilloscope. Voltage is measured across the device with a voltage probe and current through the device is measured with a current probe.

Although a simple pulse generator/oscilloscope combination is good for fast transient pulse measurements, it’s not appropriate for all pulse measurement applications. A scope’s measurement resolution is relatively low (8–12 bits). Because scopes are designed to capture waveforms, they’re not well suited for making pulse I-V measurements. Although the built-in pulse measure functions can help with measuring the level of a pulse, this represents only a single point on the I-V curve. Generating a complete curve with this setup would be time consuming, requiring either manual data collection or a lot of programming. Pulse generators are typically limited to outputting 10-20V max with a current delivery capability of only a couple hundred milliamps, which would limit this setup to lower power devices and/or lower power tests. Test setup can also be complex. Getting the desired voltage at the device requires impedance matching with the pulse generator. If a shunt resistor is used to measure current, then the voltage drop across this resistor must be taken into account as well.

Curve tracers were all-in-one instruments designed specifically for I-V characterization of 2- and 3-terminal power semiconductor devices. They featured high current and high voltage supplies for stimulating the device, a configurable voltage/current source for stimulating the device’s control terminal, a built-in test fixture for making connections, a scope-like display for real-time feedback, and a knob for controlling the magnitude of the output. However, source measure unit (SMU) instruments (FIGURE 4) have now largely taken over the functions they once performed.

FIGURE 4. Model 2620B System SourceMeter SMU instrument.

SMU instruments combine the source capabilities of a precision power supply with the measurement capabilities of a high accuracy DMM. Although originally designed for making extremely accurate DC measurements, SMU instruments have been enhanced to include pulse measurement capabilities as well. These instruments can source much higher currents in pulse mode than in DC mode. For example, the Keithley Model 2602B SourceMeter SMU instrument can output up to 3A DC and up to 10A pulsed. For applications that require even higher currents, the Model 2651A SourceMeter SMU instrument can output up to 20A DC or 50A pulsed. If two Model 2651As are configured in parallel, pulse current outputs up to 100A are possible.

SMU instruments can source both voltage and current with high accuracy thanks to an active feedback loop that monitors the output and adjusts it as necessary to achieve the programmed output value. They can even sense voltage remotely, directly at the DUT, using a second set of test leads, ensuring the correct voltage at the device. These instruments measure with high precision as well, with dual 28-bit delta-sigma or integrating-type ADCs. Using these ADCs along with their flexible sourcing engines, SMUs can perform very accurate pulse I-V measurement sweeps to characterize devices. Some, including the Model 2651A, also include two SAR-type ADCs that can sample at 1 mega-sample per second with 18-bit resolution, making them excellent for transient pulse measurements as well.

In addition, some SMU instruments offer excellent low current capability, with ranges as low as 100pA with 100aA resolution. Their wide dynamic range makes SMU instruments an excellent choice for both ON- and OFF-state device characterization. Also, because they combine sourcing and measurement in a single instrument, SMU instruments reduce the number of instruments involved, which not only simplifies triggering and programming but reduces the overall cost of test.

Although SMU instruments are often used for pulse measurements, they don’t operate in the same way as a typical pulse generator. For example, an SMU instrument’s rise and fall times cannot be controlled by the user; they depend on the gain and bandwidth of the instrument’s feedback loop. Because these loops are designed to generate little or no overshoot when stepping the source, the minimum width of the pulses they produce is not as short as what a pulse generator can achieve. However, an SMU instrument can produce pulse widths as short as 50–100μs, which minimizes device self-heating.

The terminology used to describe a pulse when using SMU instruments differs slightly from that used with pulse generators. Rather than referring to the output levels in the pulse as amplitude and base or the high level and the low level, with SMU instruments the high level is referred to as the pulse level and the low level as the bias level. The term bias level originates from the SMU’s roots in DC testing, where one terminal of a device might be biased with a fixed level. Pulse width is still used with SMU instruments, but its definition is slightly different. Given that rise and fall times cannot be set directly and vary with the range in use and the load connected to the output, pulse width can’t be accurately defined by Full Width at Half Maximum (FWHM); refer to the sidebar for more information on FWHM. Instead, for most SMU instruments, pulse width is defined as the time from the start of the rising edge to the start of the falling edge, points chosen because they are under the user’s control.

In other words, the user can set the pulse width by setting the time between when the source is told to go to the pulse level and then told to go back to the bias level.

FIGURE 5. A pulse measure unit card combines the capabilities of a pulse generator and a high resolution oscilloscope.

Pulse measure units (PMUs) combine the capabilities of a pulse generator and a high-resolution oscilloscope, and are sometimes implemented as card-based solutions designed to plug into a test mainframe. Keithley’s Model 4225-PMU, designed for use with the Model 4200 Semiconductor Characterization System (FIGURE 5), is one example. It has two independent channels capable of sourcing up to 40V at up to 800mA. Like a standard pulse generator, users can define all parameters of the pulse shape. Pulse widths as narrow as 60ns and rise and fall times as short as 20ns make it well suited for characterizing devices with fast transients. A Segment Arb mode allows outputting multi-level pulse waveforms in separately defined segments, with separate voltage levels and durations for each. Each PMU channel is capable of measuring both current and voltage using two 14-bit 200MS/s ADCs per channel, for a total of four ADCs per card. Additionally, all four ADCs are capable of sampling together synchronously at full speed. By combining a pulse generator with scope-like measurement capability in one instrument, a PMU can not only make high-resolution transient pulse measurements but also perform pulse I-V measurement sweeps easily, using a spot mean method for enhanced resolution.


Now established in UV curing, UV LED technology will find growth opportunities in disinfection and purification and in new applications by 2017/2018. In its new technology and market analysis entitled “UV LED – Technology, Manufacturing and Application Trends”, Yole Développement (Yole), the “More Than Moore” market research, technology and strategy consulting company, reviews the traditional UV lamp business and details its current transition to UV LED technology. Indeed, industry players confirm their interest in cheaper, more compact technology.

UV LED market

Yole’s report presents a comprehensive review of all UV lamp applications, including a deep analysis of UV curing, UV purification and disinfection, and analytical instruments. It highlights the UV LED working principle, market structure, UV LED market drivers and associated challenges and characteristics, and the total accessible market for UV LEDs. In this report, Yole’s analysts also detail the market volume and size metrics for traditional UV lamps and UV LEDs over the period 2008-2019, with splits by application for each technology.

Thanks to its compactness and low cost of ownership, UV LED technology continues to make its way into the booming UV curing business by replacing incumbent technologies such as mercury lamps.

“Thanks to this, an overall UV LED market that represented only ~$20M in 2008 grew to ~$90M in 2014, at a compound annual growth rate of 28.5 percent,” explains Pars Mukish, Business Unit Manager, LED activities at Yole. Such growth is likely to continue as LED-powered UV curing spreads across the ink, adhesive and coating industries. Mukish explains: “By 2017/2018, the UV LED market should also see part of its revenues coming from UVC disinfection and purification applications, for which device performance is not yet sufficient. The UV LED business is therefore expected to grow from ~$90M in 2014 to ~$520M in 2019.” This market evaluation takes into account only standard applications, where UV LEDs replace UV lamps.
Pars Mukish adds: “The potential is even greater, if we consider UV LEDs’ ability to enable new concepts in areas like general lighting, horticultural lighting, biomedical devices, and in fighting hospital-acquired infections (HAIs).”

Even this is just scratching the surface of UV LEDs’ real potential. While the new applications do not yet have a strong impact on market size, Yole expects them to account for nearly 10 percent of the total UV LED market by 2019.

In 2008, Yole started its investigation of UV LED technologies. The consulting company highlights: “Less than ten companies were developing and manufacturing these devices at that time.” Since then, more than 50 companies have entered the market, over 30 of them between 2012 and 2014, mostly attracted by the higher margins available after the overcapacity and strong price pressure of the “LED TV crisis” had taken their toll on the visible LED industry. These were mostly small and medium enterprises.

Recently, some big companies from the visible LED industry – namely Philips Lumileds and LG Innotek – have also secured a foothold in the UV LED business. According to Yole’s analysis, the entry of these two giants will help further develop the industry, the market and the technology, building on their strong experience in the visible LED industry.

A good example of this is that they have made a nearly full transition of their process to 6” sapphire substrates. “Compared to a 2” based process, this can provide at least a 30 percent overall productivity increase, which would help to further reduce manufacturing cost…” comments Pars Mukish.

Leading industry experts provide their perspectives on what to expect in 2015. 3D devices and 3D integration, rising process complexity and “big data” are among the hot topics.

Entering the 3D era

Steve Ghanayem, vice president, general manager, Transistor and Interconnect Group, Applied Materials

This year, the semiconductor industry celebrates the 50th anniversary of Moore’s Law. We are at the onset of the 3D era. We expect to see broad adoption of 3D FinFETs in logic and foundry. Investments in 3D NAND manufacturing are expanding as this technology takes hold. This historic 3D transformation impacting both logic and memory devices underscores the aggressive pace of technology innovation in the age of mobility. The benefits of going 3D — lower power consumption, increased processing performance, denser storage capacity and smaller form factors — are essential for the industry to enable new mobility, connectivity and Internet of Things applications.

The semiconductor equipment industry plays a major role in enabling this 3D transformation through new materials, capabilities and processes. Fabricating leading-edge 3D FinFET and NAND devices adds complexity in chip manufacturing that has soared with each node transition. The 3D structure poses unique challenges for deposition, etch, planarization, materials modification and selective processes to create a yielding device, requiring significant innovations in critical dimension control, structural integrity and interface preparation. As chips get smaller and more complex, variations accumulate while process tolerances shrink, eroding performance and yields. Chipmakers need cost-effective solutions to rapidly ramp device yield to maintain the cadence of Moore’s Law. Given these challenges, 2015 will be the year when precision materials engineering technologies are put to the test to demonstrate high-volume manufacturing capabilities for 3D devices.

Achieving excellent device performance and yield for 3D devices demands equipment engineering expertise leveraging decades of knowledge to deliver the optimal system architecture with wide process window. Process technology innovation and new materials with atomic-scale precision are vital for transistor, interconnect and patterning applications. For instance, transistor fabrication requires precise control of fin width, limiting variation from etching to lithography. Contact formation requires precision metal film deposition and atomic-level interface control, critical to lowering contact resistance. In interconnect, new materials such as cobalt are needed to improve gap fill and reliability of narrow lines as density increases with each technology node. Looking forward, these precision materials engineering technologies will be the foundation for continued materials-enabled scaling for many years to come.

Increasing process complexity and opportunities for innovation

Brian Trafas, Chief Marketing Officer, KLA-Tencor Corporation

The 2014 calendar year started with promise and optimism for the semiconductor industry, and it concluded with similar sentiments. While concerns about financial risk and industry consolidation at times threaten to overshadow the industry, there is much to be positive about as we arrive in the new year. From increases in equipment spending and revenue in the materials market to record-level silicon wafer shipment projections, 2015 forecasts all point in the right direction. Industry players are also doing their part to address new challenges, creating strategies to overcome complexities associated with innovative techniques, such as multipatterning and 3D architectures.

The semiconductor industry continues to explore new technologies, including 3DIC, TSV, and FinFETs, which carry challenges that also happen to represent opportunities. First, for memory as well as foundry logic, the need for multipatterning to extend lithography is a key focus. We’re seeing some of the value of a traditional lithography tool shifting into some of the non-litho processing steps. As such, customers need to monitor litho and non-litho sources of error and critical defects to be able to yield successfully at next generation nodes.  To enable successful yields with decreasing patterning process windows, it is essential to address all sources of error to provide feed forward and feed backward correctly.

The transition from 2D to 3D in memory and logic is another focus area.  3D leads to tighter process margins because of the added steps and complexity.  Addressing specific yield issues associated with 3D is a great opportunity for companies that can provide value in addressing the challenges customers are facing with these unique architectures.

The wearable, intelligent mobile and IoT markets are continuing to grow rapidly and bring new opportunities. We expect the IoT will drive higher levels of semiconductor content and contribute to future growth in the industry. The demand for these types of devices will add value across the entire chain, including not only semiconductor devices but also software and services.  The semiconductor content in these devices can provide growth opportunities for microcontrollers and embedded processors as well as sensing semiconductor devices.

Critical to our industry’s success is tight collaboration among peers and with customers. With such complexity to the market and IC technology, it is very important to work together to understand challenges and identify where there are opportunities to provide value to customers, ultimately helping them to make the right investments and meet their ramps.

Controlling manufacturing variability key to success at 10nm

Richard Gottscho, Ph.D., Executive Vice President, Global Products, Lam Research Corporation

This year, the semiconductor industry should see the emergence of chip-making at the 10nm technology node. When building devices with geometries this small, controlling manufacturing process variability is essential and most challenging since variation tolerance scales with device dimensions.

Controlling variability has always been important for improving yield and device performance. With every advance in technology and change in design rule, tighter process controls are needed to achieve these benefits. At the 22/20nm technology node, for instance, variation tolerance for CDs (critical dimensions) can be as small as one nanometer, or about 14 atomic layers; for the 10nm node, it can be less than 0.5nm, or just 3 – 4 atomic layers. Innovations that drive continuous scaling to sub-20nm nodes, such as 3D FinFET devices and double/quadruple patterning schemes, add to the challenge of reducing variability. For example, multiple patterning processes require more stringent control of each step because additional process steps are needed to create the initial mask:  more steps mean more variability overall. Multiple patterning puts greater constraints not only on lithography, but also on deposition and etching.

Three types of process variation must be addressed:  within each die or integrated circuit at an atomic level, from die to die (across the wafer), and from wafer to wafer (within a lot, lot to lot, chamber to chamber, and fab to fab). At the device level, controlling CD variation to within a few atoms will increasingly require the application of technologies such as atomic layer deposition (ALD) and atomic layer etching (ALE). Historically, some of these processes were deemed too slow for commercial production. Fortunately, we now have cost-effective solutions, and they are finding their way into volume manufacturing.

To complement these capabilities, advanced process control (APC) will be incorporated into systems to tune chemical and electrical gradients across the wafer, further reducing die-to-die variation. In addition, chamber matching has never been more important. Big data analytics and subsystem diagnostics are being developed and deployed to ensure that every system in a fab produces wafers with the same process results to atomic precision.

Looking ahead, we expect these new capabilities for advanced variability control to move into production environments sometime this year, enabling 10nm-node device fabrication.

2015: The year 3D-IC integration finally comes of age

Paul Lindner, Executive Technology Director, EV Group

2015 will mark an important turning point in the course of 3D-IC technology adoption, as the semiconductor industry moves 3D-IC fully out of the development and prototyping stages and onto the production floor. In several applications, this transition is already taking place. To date, at least a dozen components in a typical smart phone employ 3D-IC manufacturing technologies. While the application processor and memory in these smart devices continue to be stacked at the package level (package-on-package, or PoP), many other device components—including image sensors, MEMS, RF front end and filter devices—are now realizing the promise of 3D-IC, namely reduced form factor, increased performance and, most importantly, reduced manufacturing cost.

The increasing adoption of wearable mobile consumer products will also accelerate the need for higher-density integration and reduced form factor, particularly with respect to MEMS devices. More functionality will be integrated both within the same device and within one package via 3D stacking. Nine-axis inertial measurement units (IMUs, which comprise three accelerometer, three gyroscope and three magnetometer axes) will see reductions in size, cost and power consumption, along with easier integration.

On the other side of the data stream, at data centers, expect to see new developments around 3D-IC technology coming to market in 2015 as well. Compound semiconductors integrated with photonics and CMOS will trigger the replacement of copper wiring with optical fibers to drive down power consumption and electricity costs, thanks to 3D stacking technologies. The recent introduction of stacked DRAM alongside high-performance microprocessors, such as Intel’s Knights Landing processor, already demonstrates how 3D-IC technology is finally delivering on its promises across many different applications.

Across these various applications that are integrating stacked 3D-IC architectures, wafer bonding will play a key role. This is true for 3D-ICs integrating through silicon vias (TSVs), where temporary bonding in the manufacturing flow or permanent bonding at the wafer-level is essential. It’s the case for reducing power consumption in wearable products integrating MEMS devices, where encapsulating higher vacuum levels will enable low-power operation of gyroscopes. Finally, wafer-level hybrid fusion bonding—a technology that permanently connects wafers both mechanically and electrically in a single process step and supports the development of thinner devices by eliminating adhesive thickness and the need for bumps and pillars—is one of the promising new processes that we expect to see utilized in device manufacturing starting in 2015.

2015: Curvilinear Shapes Are Coming

Aki Fujimura, CEO, D2S

For the semiconductor industry, 2015 will be the start of one of the most interesting periods in the history of Moore’s Law. For the first time in two decades, the fundamental machine architecture of the mask writer is going to change over the next few years—from Variable Shaped Beam (VSB) to multi-beam. Multi-beam mask writing is likely the final frontier—the technology that will take us to the end of the Moore’s Law era. The write times associated with multi-beam writers are constant regardless of the complexity of the mask patterns, and this changes everything. It will open up a new world of opportunities for complex mask making that make trade-offs between design rules, mask/wafer yields and mask write-times a thing of the past. The upstream effects of this may yet be underappreciated.

While high-volume production of multi-beam mask writing machines may not arrive in time for the 10nm node, there is little doubt in the industry that it will arrive by the 7nm node. Since transitions of this magnitude take several years to permeate the ecosystem, 2015 is the right time to start preparing for the impact of this change. Multi-beam mask writing enables the creation of very complex mask shapes (even ideal curvilinear shapes). When used in conjunction with optical proximity correction (OPC), inverse lithography technology (ILT) and pixelated masks, this enables more precise wafer writing with improved process margin. Improving process margin on both the mask and the wafer will allow design rules to be tighter, which will re-activate the transistor-density benefit of Moore’s Law.
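
A toy model makes the write-time contrast concrete. All of the numbers below are hypothetical and chosen only to show the shape of the trade-off: VSB write time grows with shot count, while multi-beam write time is fixed by the scan and stays flat no matter how complex or curvilinear the pattern becomes.

# Toy comparison of mask write times (hypothetical throughput figures).
def vsb_write_hours(shot_count_billions, shots_per_hour_billions=1.5):
    # VSB writers expose one shaped shot at a time, so time scales with shot count.
    return shot_count_billions / shots_per_hour_billions

def multibeam_write_hours(scan_hours=10.0):
    # Multi-beam writers raster the whole mask, so time is independent of complexity.
    return scan_hours

for shots in (10, 30, 90):  # increasingly complex, ILT-style masks
    print(f"{shots}B shots: VSB ~{vsb_write_hours(shots):.0f} h, "
          f"multi-beam ~{multibeam_write_hours():.0f} h")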

The prospect of multi-beam mask writing makes it clear that OPC needs to yield better wafer quality by taking advantage of complex mask shapes. This clear direction for the future, together with the need for more process margin and overlay accuracy, points to a requirement for complex mask shapes at the 10nm node. Technologies such as model-based mask data preparation (MB-MDP) will take center stage in 2015 as a bridge to 10nm using VSB mask writing.

Whether for VSB mask writing or for multi-beam mask writing, the shapes we need to write on masks are increasingly complex, increasingly curvilinear, and smaller in minimum width and space. The overwhelming trend in mask data preparation is the shift from deterministic, rule-based, geometric, context-independent, shape-modulated, rectangular processing to statistical, simulation-based, context-dependent, dose- and shape-modulated, any-shape processing. We will all be witnesses to the start of this fundamental change as 2015 unfolds. It will be a very exciting time indeed.

Data integration and advanced packaging driving growth in 2015

Mike Plisinski, Chief Operating Officer, Rudolph Technologies, Inc.

We see two important trends that we expect to have major impact in 2015. The first is a continuing investment in developing and implementing 3D integration and advanced packaging processes, driven not only by the demand for more power and functionality in smaller volumes, but also by the dramatic escalation in the number and density of I/O lines per die. This includes not only through-silicon vias, but also copper pillar bumps, fan-out packaging, and hyper-efficient panel-based packaging processes that use dedicated lithography systems on rectangular substrates. As the back end adopts and adapts processes from the front end, the lines that have traditionally separated these areas are blurring. Advanced packaging processes require significantly more inspection and control than conventional packaging, and this trend is still only in its early stages.

The other trend has a broader impact on the market as a whole. As consumer electronics becomes a more predominant driver of our industry, manufacturers are under increasing pressure to ramp new products faster and at higher volumes than ever before. Winning or losing an order from a mega cell phone manufacturer can make or break a year, and those orders are being won based on technology and quality, not only on price as in the past. This is forcing manufacturers to look for more comprehensive solutions to their process challenges. Instead of buying a tool that meets the criteria of their established infrastructure, then relying on IT to connect it, interpret the data, and build the charts and reports process engineers need before they can use it, manufacturers are now pushing much of this onto their vendors, saying, “We want you to provide a working tool that’s going to meet these specs right away and provide us the information we need to adjust and control our process going forward.” They want information, not just data.

Rudolph has made, and will continue to make, major investments in the development of automated analytics for process data. Now more than ever, when our customers buy a system from us, whatever its application (lithography, metrology, inspection or something new), they also want to correlate the data it generates with data from other tools across the process in order to provide more information about process adjustments. We expect these same customer demands to drive a new wave of collaboration among vendors, and we welcome the opportunity to work together to provide more comprehensive solutions for the benefit of our mutual customers.
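
As a minimal sketch of the kind of cross-tool correlation being asked for, the Python fragment below joins hypothetical per-wafer lithography and inspection results and reports how strongly they track each other. The column names and values are invented for illustration only.

import pandas as pd

# Hypothetical per-wafer results from two different tools.
litho = pd.DataFrame({"wafer_id": [1, 2, 3, 4],
                      "overlay_err_nm": [3.1, 4.8, 2.9, 6.2]})
inspection = pd.DataFrame({"wafer_id": [1, 2, 3, 4],
                           "defect_count": [12, 35, 10, 60]})

# Join on wafer ID and compute the correlation between overlay error and defect count.
merged = litho.merge(inspection, on="wafer_id")
print(merged["overlay_err_nm"].corr(merged["defect_count"]))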

Process Data – From Famine to Feast

Jack Hager, Product Marketing Manager, FEI

As shrinking device sizes have forced manufacturers to move from SEM to TEM for analysis and measurement of critical features, process and integration engineers have often found themselves having to make critical decisions using meagre rations of process data. Recent advances in automated TEM sample preparation, using FIBs to prepare high-quality, ultra-thin, site-specific samples, have opened the tap on the flow of data. Engineers can now make statistically sound decisions in an environment of abundant data. The availability of fast, high-quality TEM data has whetted their appetite for even more, and the resulting demand is drawing sample preparation systems, and in some cases TEMs, out of remote laboratories and onto the fab floor or into “near-line” locations. With the high degree of automation of both the sample preparation and the TEM, the process engineers who ultimately consume the data can now own and operate the systems that generate it, giving them control over the amount of data created.

The proliferation of exotic materials and new 3D architectures at the most advanced nodes has dramatically increased the need for fast, accurate process data. The days when performance improvements required no more than a relatively simple “shrink” of basically 2D designs using well-understood processes are long gone. Complex new processes require additional monitoring to aid process control and failure-analysis troubleshooting. Defects, both electrical and physical, are not only more numerous, but typically smaller and more varied. They are often buried below the exposed surface, which limits the effectiveness of traditional inline defect-monitoring equipment and makes diagnosing their root causes newly challenging. TEM analysis now plays a more prevalent role, providing defect insights that enable actionable process changes.

While process technologies have changed radically, market fundamentals have not. First to market still commands premium prices and builds market share. And time to market is determined largely by the speed with which new manufacturing processes can be developed and ramped to high yields at high volumes. It is in these critical phases of development and ramp that the speed and accuracy of automated sample preparation and TEM analysis are proving most valuable. The methodology has already been adopted by leading manufacturers across the industry – logic and memory, IDM and foundry. We expect the adoption to continue, and with it, the migration of sample preparation and advanced measurement and analytical systems into the fab.

Diversification of processes, materials will drive integration and customization in sub-fab

Kate Wilson, Global Applications Director, Edwards

We expect the proliferation of new processes, materials and architectures at the most advanced nodes to drive significant changes in the sub-fab, where we live. In particular, we expect to see a continuing move toward the integration of vacuum pumping and abatement functions, with custom tuning to optimize performance for an increasingly diverse array of applications becoming a requirement. There is also a growing requirement for additional features around the core units, such as thermal management, heated N2 injection, and pre- and post-pump precursor treatment, all of which need to be managed.

Integration offers clear advantages, not only in cost savings but also in safety, speed of installation, smaller footprint, consistent implementation of correct components, optimized set-ups and controlled ownership of the process effluents until they are abated reliably to safe levels. The benefits are not always immediately apparent. Just as effective integration is much more than simply adding a pump to an abatement system, the initial cost of an integrated system is more than the cost of the individual components. The cost benefits in a properly integrated system accrue primarily from increased efficiencies and reliability over the life of the system, and the magnitude of the benefit depends on the complexity of the process. In harsh applications, including deposition processes such as CVD, Epi and ALD, integrated systems provide significant improvements in uptime, service intervals and product lifetimes as well as significant safety benefits.

The trend toward increasing process customization impacts the move toward integration through its requirement that the integrator have detailed knowledge of the process and its by-products. Each manufacturer may use a slightly different recipe and a small change in materials or concentrations can have a large effect on pumping and abatement performance. This variability must be addressed not only in the design of the integrated system but also in tuning its operation during initial commissioning and throughout its lifetime to achieve optimal performance. Successful realization of the benefits of integration will rely heavily on continuing support based on broad application knowledge and experience.

Giga-scale challenges will dominate 2015

Dr. Zhihong Liu, Executive Chairman, ProPlus Design Solutions, Inc.

It wasn’t all that long ago that nano-scale was the term the semiconductor industry used to describe small transistor sizes and indicate technological advancement. Today, with Moore’s Law slowing down at sub-28nm, the term more often heard is giga-scale, owing to a leap in complexity caused in large measure by the massive amounts of big data now part of all chip design.

Nano-scale technological advancement has enabled giga-sized applications across more varieties of technology platforms, including the most popular mobile, IoT and wearable devices. EDA tools must respond to this trend. On one side, accurately modeling nano-scale devices, including the complex physical effects of small geometries and complicated device structures, has increased in both importance and difficulty. Designers now demand more from foundries and have higher standards for PDK and model accuracy. They need a deep understanding of the process platform to make their chip or IP competitive.

On the other side, giga-scale designs require accurate tools that can handle ever-increasing design sizes. The low supply voltages that accompany technology advancement and low-power applications, together with various process variation effects, have reduced available design margins. Furthermore, large circuit sizes make designs sensitive to small leakage currents and narrow noise margins. Accuracy will soon become the bottleneck for giga-scale designs.

However, traditional design tools for big designs, such as FastSPICE for simulation and verification, mostly trade off accuracy for capacity and performance. One particular example is the need for accurate memory design, e.g., large-instance memory characterization or full-chip timing and power verification. Because embedded memory may occupy more than 50 percent of the chip die area, it has a significant impact on chip performance and power. For advanced designs, power and timing characterization and verification require much higher accuracy than FastSPICE can offer – errors of 5 percent or less compared to golden SPICE.
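
A minimal sketch of that accuracy criterion: compare a fast simulator’s numbers against a golden SPICE reference and apply the 5 percent threshold mentioned above. The metric names and values below are hypothetical.

# Hypothetical golden-SPICE reference values versus fast-simulator results.
golden = {"read_delay_ps": 412.0, "leakage_uA": 18.3}
fast = {"read_delay_ps": 420.0, "leakage_uA": 17.1}

for metric, ref in golden.items():
    err_pct = abs(fast[metric] - ref) / ref * 100
    status = "OK" if err_pct <= 5.0 else "exceeds 5% target"
    print(f"{metric}: {err_pct:.1f}% error -> {status}")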

To meet the giga-scale challenges outlined above, the next-generation circuit simulator must offer the high accuracy of a traditional SPICE simulator along with capacity and performance advantages similar to those of a FastSPICE simulator. New entrants into the giga-scale SPICE simulation market readily handle the latest process technologies, such as 16/14nm FinFET, which add further challenges to capacity and accuracy.

One giga-scale SPICE simulator can cover small- and large-block simulation, characterization, and full-chip verification with a pure SPICE engine that guarantees accuracy and eliminates inconsistencies in the traditional design flow. It can be used as the golden reference for FastSPICE applications, or directly replace FastSPICE for memory designs.

The giga-scale era in chip design is here and giga-scale SPICE simulators are commercially available to meet the need.

Scientists at UCL, in collaboration with groups at the University of Bath and the Daresbury Laboratory, have solved the mystery of why blue light-emitting diodes (LEDs) are so difficult to make, by revealing the complex properties of their main component – gallium nitride – using sophisticated computer simulations.

Blue LEDs were first commercialised two decades ago and have been instrumental in the development of new forms of energy-saving lighting, earning their inventors the 2014 Nobel Prize in Physics. Light-emitting diodes are made of two layers of semiconducting materials (insulating materials which can be made to conduct electricity under special circumstances). One has mobile negative charges, or electrons, available for conduction, and the other positive charges, or holes. When a voltage is applied, an electron and a hole can meet at the junction between the two layers, and a photon (light particle) is emitted.

The desired properties of a semiconductor layer are achieved by growing a crystalline film of a particular material and adding small quantities of an ‘impurity’ element, which has more or fewer electrons taking part in the chemical bonding (a process known as ‘doping’). Depending on the number of electrons, these impurities donate an extra positive or negative mobile charge to the material.

The key ingredient for blue LEDs is gallium nitride, a robust material with a large energy separation, or ‘gap’, between electrons and holes – this gap is crucial in tuning the energy of the emitted photons to produce blue light. But while doping to donate mobile negative charges in the substance proved to be easy, donating positive charges failed completely. The breakthrough, which won the Nobel Prize, required doping it with surprisingly large amounts of magnesium.
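
The arithmetic behind that tuning is straightforward: the emitted photon energy is roughly the band gap, so the wavelength is approximately hc divided by the gap (about 1240 eV·nm / E_gap). The short Python sketch below uses approximate textbook gap values; in practice, commercial blue LEDs emit from InGaN layers whose gap is somewhat smaller than that of pure GaN.

# Approximate emission wavelength from a semiconductor band gap (gap values are approximate).
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def emission_wavelength_nm(gap_eV):
    return HC_EV_NM / gap_eV

print(round(emission_wavelength_nm(3.4)))   # GaN: ~365 nm, near-UV/violet
print(round(emission_wavelength_nm(2.75)))  # InGaN well: ~450 nm, blue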

“While blue LEDs have now been manufactured for over a decade,” says John Buckeridge (UCL Chemistry), lead author of the study, “there has always been a gap in our understanding of how they actually work, and this is where our study comes in. Naïvely, based on what is seen in other common semiconductors such as silicon, you would expect each magnesium atom added to the crystal to donate one hole. But in fact, to donate a single mobile hole in gallium nitride, at least a hundred atoms of magnesium have to be added. It’s technically extremely difficult to manufacture gallium nitride crystals with so much magnesium in them, not to mention that it’s been frustrating for scientists not to understand what the problem was.”

The team’s study, published today in the journal Physical Review Letters, unveils the root of the problem by examining the unusual behaviour of doped gallium nitride at the atomic level using highly sophisticated computer simulations.

“To make an accurate simulation of a defect in a semiconductor such as an impurity, we need the accuracy you get from a quantum mechanical model,” explains David Scanlon (UCL Chemistry), a co-author of the paper. “Such models have been widely applied to the study of perfect crystals, where a small group of atoms forms a repeating pattern. Introducing a defect that breaks the pattern presents a conundrum, which required the UK’s largest supercomputer to solve. Calculations on very large numbers of atoms were necessary, but treating the entire system at a purely quantum-mechanical level would have been prohibitively expensive.”

The team’s solution was to apply an approach pioneered in another piece of Nobel Prize winning research: hybrid quantum and molecular modelling, the subject of 2013’s Nobel Prize in Chemistry. In these models, different parts of a complex chemical system are simulated with different levels of theory.
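
Schematically, such hybrid embedding splits the total energy into a quantum-mechanical term for the small region around the defect, a cheaper classical term for the surrounding lattice, and a coupling term between the two. This is only the general form of the approach, not the specific scheme used in the study:

E_{\text{total}} \approx E_{\text{QM}}(\text{defect region}) + E_{\text{MM}}(\text{surrounding lattice}) + E_{\text{QM/MM coupling}}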

“The simulation tells us that when you add a magnesium atom, it replaces a gallium atom but does not donate the positive charge to the material, instead keeping it to itself,” says Richard Catlow (UCL Chemistry), one of the study’s co-authors. “In fact, to provide enough energy to release the charge will require heating the material beyond its melting point. Even if it were released, it would knock an atom of nitrogen out of the crystal, and get trapped anyway in the resulting vacancy. Our simulation shows that the behaviour of the semiconductor is much more complex than previously imagined, and finally explains why we need so much magnesium to make blue LEDs successfully.”

The simulations crucially fit a complete set of previously unexplained experimental results involving the behaviour of gallium nitride. Aron Walsh (Bath Chemistry) says “We are now looking forward to the investigations into heavily defective GaN, and alternative doping strategies to improve the efficiency of solid-state lighting”.