
Common thermal considerations in LEDs include test point temperature and thermal power.

While it’s true that LEDs run cool relative to the filaments found in incandescent and halogen lamps, they do generate heat within the semiconductor structure, so the system must be designed in such a way that the heat is safely dissipated. The waste heat white LEDs generate in normal operation can damage both the LED and its phosphor coating (which converts the LED’s native blue light to white) unless it’s properly channeled away from the light source.

A luminaire’s thermal design is specified to support continuous operation without heat damage, and it often separates the LEDs from temperature-sensitive electronics – an important advantage over individual LED replacement bulbs.

Test point temperature
Test point temperature (Tc) is one characteristic that plays an important role during integration to determine the amount of heat sinking, or cooling, that the luminaire design requires. In general, the higher the Tc limit compared to worst-case ambient temperature (Ta), the more flexibility a luminaire manufacturer will have in designing or selecting a cooling solution.

The worst-case ambient temperature is usually 40°C or higher, so a module with a low Tc rating (e.g., 65°C) doesn’t have much headroom above the already hot ambient temperature. Trying to keep a module at Tc 65°C when the Ta is 40°C while dissipating 40W of thermal power is very difficult to do with a passive heat sink, so a fan or other active heat sink will likely be required. On the other hand, a module with a Tc rating of 90°C or higher (while still meeting lumen maintenance and warranty specifications) has at least 50°C of headroom over the ambient temperature and should be able to make use of a reasonably sized passive heat sink.
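Because cooling solutions are rated in degrees Celsius per watt, the headroom arithmetic above maps directly to a maximum allowable heat sink thermal resistance. A minimal sketch (the function name is illustrative; the temperatures and 40W load are the figures from this example):

```python
def max_heatsink_resistance_c_per_w(tc_limit_c, ta_c, thermal_power_w):
    """Largest heat sink thermal resistance (degC/W) that still keeps the
    module test point at or below its Tc limit in a given ambient."""
    return (tc_limit_c - ta_c) / thermal_power_w

# A module dissipating 40W of thermal power in a 40 degC ambient:
low_tc_module = max_heatsink_resistance_c_per_w(65, 40, 40)   # 0.625 degC/W
high_tc_module = max_heatsink_resistance_c_per_w(90, 40, 40)  # 1.25 degC/W
```

At this power level, a 0.625 degC/W requirement generally forces a fan, while 1.25 degC/W is within reach of a reasonably sized passive extrusion.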

In short, the higher you can push the test point temperature on the LED module, the smaller the heat sink you need. Ta sets a hard limit – no heat sink, regardless of size or effectiveness, can cool a module below ambient, so a module that can’t withstand a high enough maximum temperature can’t be used without a refrigerated system. Stretching the difference between Tc and Ta as much as possible will give you greater room to deviate from the norm and be creative in your heat sink selection.

Xicato’s Corrected Cold Phosphor technology addresses the thermal path from the phosphor to where the heat sink is located, lowering the thermal resistance between the phosphor and the heat sink without having to cool through the hot LEDs. Today, module output reaches 4000 lumens, which wouldn’t have been possible five years ago.

The bottom-line considerations with respect to test point temperature are really flexibility and cost. If a module with a high Tc rating is chosen, there will be more options for design and cost savings than are provided by a module with a low Tc rating, assuming the same power dissipation.

Figure 1: Xicato XSM module family sample passive heat sink matrix showing suitable module usage for a range of thermal classes.

Thermal power
Another key characteristic, thermal power (load), has always been a difficult number to deal with. LED module manufacturers don’t always provide the information required to calculate thermal power because this value changes with such variables as lumen package, Color Rendering Index (CRI), correlated color temperature (CCT), etc. Cooling solutions, however, are often rated for performance in degrees Celsius per watt, which unfortunately makes calculating the thermal power unavoidable.

To address this problem, Xicato has developed a “class system,” through which each module variation is evaluated and assigned a “thermal class.” With this system, determining the appropriate cooling solution is as simple as referencing the thermal class from the module’s data sheet to a matrix of heat sinks. FIGURE 1 is a sample passive heat sink thermal class matrix for the Xicato XSM module family.

Let’s take, as an example, a 1300 lumen module with a thermal class rating of “F.” According to the matrix, for an ambient condition of 40°C, the best choice of heat sink would be one that is 70 mm in diameter and 40 mm tall. Validation testing is still required for each luminaire during the design phase, as variations in trims, optics, and mechanical structures can affect performance. Looking at the example module, if a manufacturer were to design a luminaire around this class “F” heat sink and nine months later a new, higher-flux class “F” module were released, the same luminaire would be able to support the higher-lumen module without the need for additional thermal testing. The thermal-class approach supports good design practice, speeds development and product portfolio expansion, and provides a future-proof approach to thermal design and integration.


Most specification sheets cite an electrical requirement for the module and the lumen output. Electrical input is basically the voltage the module requires and the current needed to drive it; the product of these two variables is power. The problem with output is that it’s always expressed in lumens – and a lumen is not a unit of power, but a unit of luminous flux weighted to the optical response of the human eye. Because it’s calibrated to what the eye sees, it can’t easily be tied back to electrical power. So there’s no way to read off exactly how much thermal power the module dissipates – power “in” is electrical (voltage × current), while power “out” is split among non-visible electromagnetic radiation, visible light, and heat. None of this breakdown is shown in datasheets.
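One rough way to bound the thermal load is to subtract the optical power implied by the lumen rating from the electrical input power. This requires assuming a luminous efficacy of radiation (lumens per optical watt), which varies with spectrum, CRI and CCT – the 300 lm/W figure below is only a ballpark for white light, and the names and values are illustrative:

```python
def estimate_thermal_power_w(volts, amps, lumens, lm_per_optical_watt=300.0):
    """Rough thermal load: electrical input power minus the optical power
    implied by the lumen output. lm_per_optical_watt (luminous efficacy of
    radiation) depends on the spectrum; ~300 lm/W is a rough white-light figure."""
    electrical_w = volts * amps
    optical_w = lumens / lm_per_optical_watt
    return electrical_w - optical_w

# e.g., a module drawing 36 V x 1.0 A while emitting 3000 lm:
# ~36 W in, ~10 W out as light, so ~26 W must be handled thermally.
thermal_w = estimate_thermal_power_w(36.0, 1.0, 3000)  # 26.0
```

This is exactly the calculation the thermal-class system spares the luminaire designer from having to perform.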

This intangible factor creates a challenge – for most customers, a watt is a watt, but in reality, there are thermal watts, electrical watts and optical watts; not all are easily determined. The customer can attempt calculations – e.g., how to cool 10 thermal watts – but the fact is that people don’t generally think that way. Many customers don’t have engineers on staff, and those that do often use rough approximations to determine compatibility.

Xicato has defined modules that go up to Class U. The thermal class is interrelated with the module’s flux package and Tc rating, though not determined by either alone. Class A modules, in general, don’t need a heat sink; these lower power modules usually achieve about 300 lumens. On the other hand, an XLM 95 CRI product is a Class U product that requires either a passive heat sink or an active heat sink. Once the module and heat sink have been selected and integrated into the luminaire, the next step is thermal validation, which Xicato performs for the specific fixture using an intensive testing process that includes detailed requirements the luminaire maker must meet when submitting a fixture for validation (see Table 1 for a partial summary).

The validation is based not on lumens, but on the thermal class model, and the fixture rating is also based on thermal class, rather than wattage, because watts differ. With this approach, an upgrade can be made easily without having to do any retesting. •


JOHN YRIBERRI is the director of Global Application Engineering at Xicato, Inc., San Jose, CA. John joined Xicato in November of 2007 and was the project leader for Xicato’s first LED platform – the Xicato Spot Module (XSM).

Power device characterization and reliability testing require instrumentation capable of sourcing higher voltages and more sensitive current measurements than ever before.

Silicon carbide (SiC), gallium nitride (GaN), and similar wide bandgap semiconductor materials offer physical properties superior to those of silicon, which allows for power semiconductor devices based on these materials to withstand high voltages and temperatures. These properties also permit higher frequency response, greater current density and faster switching. These emerging power devices have great potential, but the technologies necessary to create and refine them are less mature than silicon technology. For IC fabricators, this presents significant challenges associated with designing and characterizing these devices, as well as process monitoring and reliability issues.

Before wide bandgap devices can gain commercial acceptance, their reliability must be proven – and the demand for higher reliability is growing. The continuous drive for greater power density at the device and package levels has consequences in terms of higher temperatures and temperature gradients across the package. New application areas often mean more severe ambient conditions. For example, in automotive hybrid traction systems, the temperature of the cooling liquid for the combustion engine may reach up to 120°C. To provide sufficient margin, the maximum junction temperature (TJMAX) must therefore be increased from 150°C to 175°C. In safety-critical applications such as aircraft, the zero-defect concept has been proposed to meet stricter reliability requirements.

HTRB reliability testing
Along with the drain-source voltage (VDS) ramp test, the High Temperature Reverse Bias (HTRB) test is one of the most common reliability tests for power devices. In a VDS ramp test, as the drain-source voltage is stepped from a low voltage to a voltage higher than the rated maximum drain-source voltage, specified device parameters are evaluated. This test is useful for tuning the design and process conditions, as well as verifying that devices deliver the performance specified on their data sheets. For example, Dynamic RDS(ON), monitored using a VDS ramp test, provides a measurement of how much a device’s ON-resistance increases after being subjected to a drain bias. A VDS ramp test offers a quick form of parametric verification; in contrast, an HTRB test evaluates long-term stability under high drain-source bias. HTRB tests use biased operating conditions to accelerate failure mechanisms that are thermally activated. During an HTRB test, the device samples are stressed at or slightly below the maximum rated reverse breakdown voltage (usually 80% or 100% of VRRM) at an ambient temperature close to their maximum rated junction temperature (TJMAX) over an extended period (usually 1,000 hours).

Because HTRB tests stress the die, they can lead to junction leakage. There can also be parametric changes resulting from the release of ionic impurities onto the die surface, from either the package or the die itself. The test’s high temperature accelerates failure mechanisms according to the Arrhenius equation, which describes the temperature dependence of reaction rates; a relatively short test at elevated temperature therefore simulates a much longer test conducted at a lower temperature. The leakage current is continuously monitored throughout the HTRB test, and a fairly constant leakage current is generally required to pass. Because it combines electrical and thermal stress, this test can be used to check junction integrity, crystal defects and ionic-contamination level, which can reveal weaknesses or degradation effects in the field depletion structures at the device edges and in the passivation.
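The Arrhenius scaling can be sketched numerically. The activation energy is mechanism-dependent; the 0.7 eV default below is a placeholder for illustration, not a value from the text:

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(t_use_c, t_stress_c, ea_ev=0.7):
    """Acceleration factor between a stress temperature and a use temperature
    per the Arrhenius model. ea_ev is the mechanism-dependent activation
    energy; 0.7 eV is only a common placeholder."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

# 1,000 hours at 175 degC simulates several hundred times as long at 55 degC:
af = arrhenius_af(55, 175)
equivalent_hours = 1000 * af
```

This is why a 1,000-hour HTRB run can stand in for years of field operation at more benign junction temperatures.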

Instrument and measurement considerations
During operation, power semiconductor devices undergo both electrical and thermal stress: when in the ON state, they have to pass tens or hundreds of amps with minimal loss (low voltage, high current); when they are OFF, they have to block thousands of volts with minimal leakage currents (high voltage, low current). Additionally, during the switching transient, they are subject to a brief period of both high voltage and high current. The high current experienced during the ON state generates a large amount of heat, which may degrade device reliability if it is not dissipated efficiently.

Reliability tests typically involve high voltages, long test times, and often multiple devices under test (wafer-level testing). As a result, properly designed test systems and measurement plans are essential to avoid breaking devices, damaging equipment, and losing test data. Consider the following factors when configuring test systems and plans for executing VDS ramp and HTRB reliability tests: device connections, current limit control, stress control, proper test abort design, and data management.

Device connections: Depending on the number of instruments and devices or the probe card type, various connection schemes can be used to achieve the desired stress configurations. When testing a single device, a user can apply voltage at the drain only for VDS stress and measure, which requires only one source measure unit (SMU) instrument per device. Alternatively, a user can connect each gate and source to a SMU instrument for more control in terms of measuring current at all terminals, extending the range of VDS stress, and setting voltage on the gate to simulate a practical circuit situation. For example, to evaluate the device in the OFF state (including the HTRB test), the gate-source voltage (VGS) might be set to VGS < 0 for a depletion-mode device, or VGS = 0 for an enhancement-mode device. Careful consideration of device connections is essential for multi-device testing. In a vertical device structure, the drain is common; therefore, it is not used for stress sourcing, so that stress will not be terminated if a single device breaks down. Instead, the source and gate are used to control stress.

Current limit control: The current limit should allow for adjustment at breakdown to avoid damage to the probe card and device. The current limit is usually set by estimating the maximum current during the entire stress process, for example, the current at the beginning of the stress. When a device breakdown occurs, however, the current limit should be lowered accordingly: the output clamps to the limit, and over an extended time that clamped current can melt the probe card tips and damage the devices. Some modern solutions offer dynamic limit change capabilities, which allow setting a varying current limit for the system’s SMU instruments when applying the voltage. When this function is enabled, the output current is clamped to the limit (compliance value) to prevent damage to the device under test (DUT).
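The dynamic-limit behavior can be sketched as a simple control loop. The SMU wrapper below is a stand-in written for illustration, not a real instrument driver API:

```python
class FakeSMU:
    """Minimal stand-in for an SMU driver (illustration only)."""
    def __init__(self, currents):
        self._currents = iter(currents)
        self._limit = 0.0
    def set_current_limit(self, amps):
        self._limit = amps
    def current_limit(self):
        return self._limit
    def force_voltage(self, volts):
        pass  # a real driver would program the source here
    def measure_current(self):
        # a real SMU clamps the output to the compliance limit
        return min(next(self._currents), self._limit)

def run_stress(smu, v_stress, normal_limit_a, post_breakdown_limit_a, steps):
    """Start with a compliance limit sized for the normal stress current; once
    the measured current clamps near that limit (breakdown), drop the limit so
    the clamped current cannot cook the probe tips or the DUT."""
    smu.set_current_limit(normal_limit_a)
    smu.force_voltage(v_stress)
    for _ in range(steps):
        if smu.measure_current() >= 0.9 * smu.current_limit():
            smu.set_current_limit(post_breakdown_limit_a)
            break
    return smu.current_limit()

# Leakage stays at nanoamps until the third sample, where the device breaks down:
final_limit = run_stress(FakeSMU([1e-9, 1e-9, 0.5]), 600.0, 0.1, 1e-3, 10)
```

A production implementation would of course use the vendor driver’s own compliance calls; the point is only the ordering of detect-then-lower.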

Stress control: The high voltage stress must be well controlled to avoid overstressing the device, which can lead to unexpected device breakdown. Newer systems may offer a “soft bias” function that allows the forced voltage or current to reach the desired value by ramping gradually at the start or the end of the stress, or when aborting the test, instead of changing suddenly. This helps to prevent in-rush currents and unexpected device breakdowns. In addition, it serves as a timing control over the process of applying stress.
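A soft bias is simply a staircase toward the target instead of a single step. A sketch, with illustrative voltage and timing values:

```python
def soft_ramp_levels(v_target, ramp_time_s, step_s):
    """Voltage levels a 'soft bias' feature would step through on the way to
    the stress voltage, instead of jumping there in one step."""
    n_steps = max(1, round(ramp_time_s / step_s))
    return [v_target * k / n_steps for k in range(1, n_steps + 1)]

# Ramp to 600 V over 1 s in 100 ms steps: 60 V, 120 V, ..., 600 V
levels = soft_ramp_levels(600.0, 1.0, 0.1)
```

The same staircase, run in reverse, serves for the post-bias ramp-down and for a graceful abort.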

Proper test abort design: The test program must be designed in a way that allows the user to abort the test (that is, terminate the test early) without losing the data already acquired. Test configurations with a “soft abort” function offer the advantage that test data will not be lost at the termination of the test program, which is especially useful for those users who do not want to continue the test as planned. For instance, imagine that 20 devices are being evaluated over the course of 10 hours in a breakdown test and one of the tested devices exhibits abnormal behavior (such as substantial leakage current). Typically, that user will want to stop the test and redesign the test plan without losing the data already acquired.

Data management: Reliability tests can run over many hours, days, or weeks, and have the potential to amass enormous datasets, especially when testing multiple sites. Rather than collecting all the data produced, systems with data compression functions allow logging only the data important to that particular work. The user can choose when to start data compression and how the data will be recorded. For example, data points can be logged when the current shift exceeds a specified percentage as compared to previously logged data and when the current is higher than a specified noise level.
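The logging rule described above can be sketched as a filter over the raw samples; the 5% shift and 1 nA noise-floor thresholds below are illustrative, not values from the text:

```python
def should_log(i_now_a, i_last_logged_a, shift_pct=5.0, noise_floor_a=1e-9):
    """Log a sample only if it is above the noise floor and has shifted by
    more than shift_pct relative to the last logged value."""
    if abs(i_now_a) <= noise_floor_a:
        return False
    if i_last_logged_a is None:
        return True  # always keep the first meaningful point
    return abs(i_now_a - i_last_logged_a) > (shift_pct / 100.0) * abs(i_last_logged_a)

def compress_log(samples_a, **thresholds):
    """Reduce a long current-vs-time record to its significant shifts."""
    logged, last = [], None
    for i in samples_a:
        if should_log(i, last, **thresholds):
            logged.append(i)
            last = i
    return logged

# A stable leakage trace collapses to just its significant changes:
kept = compress_log([1.0e-6, 1.01e-6, 1.2e-6, 0.5e-9])
```

Over a 1,000-hour HTRB run this kind of rule is the difference between megabytes and gigabytes of logged data.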

A comprehensive hardware and software solution is essential to address these test considerations effectively, ideally one that supports high power semiconductor characterization at the device, wafer and cassette levels. The measurement considerations described above, although very important, are too often left unaddressed in commercial software implementations. The software should also offer sufficient flexibility to allow users to switch easily between manual operation for lab use and fully automated operation for production settings, using the same test plan. It should also be compatible with a variety of sourcing and measurement hardware, typically various models of SMU instruments equipped with sufficient dynamic range to address the application’s high power testing levels.

With the right programming environment, system designers can readily configure test systems with anything from a few instruments on a benchtop to an integrated, fully automated rack of instruments on a production floor, complete with standard automatic probers. For example, Keithley’s Automated Characterization Suite (ACS) integrated test plan and wafer description function allow setting up single or multiple test plans on one wafer and selectively executing them later, either manually or automatically. This test environment is compatible with many advanced SMU instruments, including low current SMU instruments capable of sourcing up to 200V and measuring with 0.1fA resolution and high power SMU instruments capable of sourcing up to 3kV and measuring with 1fA resolution.

Figure 1: Example of a stress vs. time diagram for the Vds_Vramp test for a single device and the associated device connection. Drain, gate and source are each connected to an SMU instrument. The drain is used for VDS stress and measure; the VDS range is extended by a positive bias on the drain and a negative bias on the source. A soft bias (gradual change of stress) is enabled at the beginning and end of the stress (initial bias and post bias). Measurements are performed at the “x” points.

The test development environment includes a VDS breakdown test module that’s designed to apply two different stress tests across the drain and source of the MOSFET structure (or across the collector and emitter of an IGBT) for VDS ramp and HTRB reliability assessment.

Vds_Vramp – This test sequence is useful for evaluating the effect of a drain-source bias on the device’s parameters and offers a quick method of parametric verification (FIGURE 1). It has three stages: optional pre-test, main stress-measure, and optional post-test. During the pre-test, a constant voltage is applied to verify the initial integrity of the body diode of the MOSFET; if the body diode is determined to be good, the test proceeds to the main stress-measure stage. Starting at a lower level, the drain-source voltage stress is applied to the device and ramps linearly to a point higher than the rated maximum voltage or until the user-specified breakdown criterion is met. If the tested device has not broken down during the main stress stage, the test proceeds to the next step, the post-test, in which a constant voltage is applied to evaluate the state of the device, similar to the pre-test. The measurements throughout the test sequence are made at both source and gate for multi-device testing (or at the drain in the single-device case), and the breakdown criterion is based on the current measured at the source (or the drain for a single device).
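The three-stage flow can be sketched as follows. Here `measure_leakage` stands in for the instrument call, and the voltages and breakdown criterion are illustrative:

```python
def vds_vramp(measure_leakage, v_start, v_stop, v_step, i_breakdown_a,
              v_pretest=None):
    """Sketch of the Vds_Vramp flow: optional constant-voltage pre-test,
    linear ramp until the breakdown criterion trips, optional post-test."""
    if v_pretest is not None and measure_leakage(v_pretest) >= i_breakdown_a:
        return {"stage": "pre-test", "failed": True}
    v, data = v_start, []
    while v <= v_stop:
        i = measure_leakage(v)
        data.append((v, i))
        if i >= i_breakdown_a:  # breakdown criterion met during the ramp
            return {"stage": "stress", "failed": True, "v_bd": v, "data": data}
        v += v_step
    post_ok = v_pretest is None or measure_leakage(v_pretest) < i_breakdown_a
    return {"stage": "post-test", "failed": not post_ok, "data": data}

# Toy device model: nanoamp leakage until it breaks down somewhere above 600 V.
leak = lambda v: 1e-9 if v < 650 else 1e-3
result = vds_vramp(leak, 100, 800, 100, i_breakdown_a=1e-4, v_pretest=50)
```

Running this against the toy device reports a breakdown during the stress stage at the first ramp step past 650 V.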

Vds_Constant – This test sequence can be set up for reliability testing over an extended period and at elevated temperature, such as an HTRB test (FIGURE 2). The Vds_Constant test sequence has a structure similar to that of the Vds_Vramp, with a constant voltage stress applied to the device during the stress stage and different breakdown settings. The stability of the leakage current (IDSS) is monitored throughout the test.

Figure 2: Example of a stress vs. time diagram for the Vds_Constant test sequence for the vertical-structure, multi-device case and the associated device connection. The common drain, gate and source are each connected to an SMU instrument. The source is used for VDS stress and measure; the VDS range is extended by a positive bias on the drain and a negative bias on the source. A soft bias (gradual change of stress) is enabled at the beginning and end of the stress (initial bias and post bias). Measurements are performed at the “x” points.


Conclusion

HTRB testing offers wide bandgap device developers invaluable insights into the long-term reliability and performance of their designs. •


LISHAN WENG is an applications engineer at Keithley Instruments, Inc. in Cleveland, Ohio, which is part of the Tektronix test and measurement portfolio. [email protected].

A wide array of package-level integration technologies now available to chip and system designers is reviewed.

As the technical challenges of shrinking transistors per Moore’s Law become increasingly harder and costlier to overcome, fewer semiconductor manufacturers are able to upgrade to the next lower process nodes (e.g., 20nm). Therefore, various alternative schemes to cram more transistors into a given footprint without having to shrink individual devices are being actively pursued. Many of these involve 3D stacking to reduce both footprint and the length of interconnect between devices.

A leading memory manufacturer has just announced 3D NAND products in which circuit layers are fabricated one over the other on the same wafer, resulting in higher device density on an area basis without having to develop smaller transistors. However, such integration may not be readily feasible when irregular non-memory structures, such as sensors and CPUs, are to be integrated in 3D. Similar limits apply to 3D integration of devices that require very different process flows, such as analog with digital processor and memory.

For applications where integration of chips with such heterogeneous designs and processes is required, integration at the package level becomes a viable alternative. For package-level integration, 3D stacking of individual chips is the ultimate configuration in terms of reducing footprint and improving performance by shrinking the interconnect length between individual chips in the stack. Such packages are already in mass production for camera modules that require tight coupling of the image sensor to a signal processor. Other applications, such as 3D stacks of DRAM chips and CPU/memory stacks, are under development. For these applications, 3D modules have been chosen to reduce not just the form factor but also the length of interconnects between individual chips.

Figure 1: Equivalent circuit for interconnect between DRAM and SoC chips in a PoP package.

Interconnects: a necessary evil

To a chip or system designer, the interconnect between transistors or the wiring between chips is a necessary evil. Interconnects introduce parasitic R, L and C into the signal path. For die-level interconnects this problem was recognized at least two decades ago, when RC delay in CPU interconnects became a roadblock to operation above 2GHz. This prompted major changes in materials for wafer-level interconnects. For the conductors, the shift was from aluminum to lower-resistance copper, which enabled a shrink in geometries. For the surrounding interlayer dielectric, which affects the parasitic capacitance, silicon dioxide was replaced by various low-k and even ultra-low-k (dielectric constant) materials, in spite of their poorer mechanical properties. Similar changes were made even earlier in the chip packaging arena, when ceramic substrates were replaced by lower-k organic substrates that also reduced costs.

Interconnects in packages and PCBs likewise introduce parasitic capacitance that contributes to signal distortion and may limit the maximum possible bandwidth. The power lost to the parasitic capacitance of interconnects while transmitting digital signals through them depends linearly on the capacitance as well as on the bandwidth. With the rise in bandwidth even in battery-driven consumer electronics, such as smartphones, power loss in the package or PCBs becomes ever more significant (30%) as losses in the chips themselves are reduced through better design (e.g., ESD structures with lower capacitance).
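The linear dependence on both capacitance and bandwidth follows from the dynamic switching-loss relation P = a·C·V²·f. A sketch with illustrative values (the activity factor, swing and bit rate are assumptions, not figures from the text):

```python
def interconnect_loss_w(c_farads, v_swing_v, bit_rate_hz, activity=0.5):
    """Power spent charging/discharging interconnect capacitance while
    signaling: P = activity * C * V^2 * f. Linear in C and in bit rate.
    activity is the assumed fraction of bit periods with a transition."""
    return activity * c_farads * v_swing_v ** 2 * bit_rate_hz

# ~2 pF of package interconnect at a 1.2 V swing and 800 Mbit/s per line:
p_line = interconnect_loss_w(2e-12, 1.2, 800e6)  # ~1.15 mW per line
```

Multiply by a wide memory bus and the package-level loss quickly becomes a visible fraction of the I/O power budget.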

Improving the performance of package level interconnects

Over a decade ago, the chip packaging world went through a round of reducing interconnect length and increasing interconnect density when, for high-performance chips such as CPUs, traditional peripheral wirebond technology was replaced by solder-bumped area-array flip chip technology. The interconnect length was reduced by at least an order of magnitude, with a corresponding reduction in parasitics and rise in the bandwidth for data transfer to adjacent chips, such as the DRAM cache. However, this improvement in electrical performance came at the expense of mechanical complications: the tighter coupling of the silicon chip to a substrate with a much larger coefficient of thermal expansion (6-10X that of Si) exposed the solder bump interconnects between them to cyclic stress and transmitted some stress to the chip itself. The resulting Chip Package Interaction (CPI) gets worse with larger chips and weaker low-k dielectrics on the chip.

The latest innovation in chip packaging technology is 3D stacking with through silicon vias (TSVs), in which numerous vias (5µm in diameter and getting smaller) are etched in the silicon wafer and filled with a conductive metal, such as Cu or W. The wafers or singulated chips are then stacked vertically and bonded to one another. 3D stacking with TSVs provides the shortest interconnect length between chips in the stack, with improvements in bandwidth, efficiency of power required to transmit data, and footprint. However, as we shall see later, 3D TSV technology has been delayed not only because of the complex logistics issues that are often discussed, but also because of technical issues rooted in choices made for the most common variant: TSVs filled with Cu, with parallel wafer thinning.

Figure 2: Breakdown of capacitance contributions from various elements of intra-package interconnect in a PoP. The total may exceed 2 pF.

Equivalent circuit for packages

PoP (package-on-package) is a pseudo-3D package built with current non-TSV technologies and is ubiquitous in smartphones. In a PoP, two packages (DRAM and SoC) are stacked one over the other and connected vertically by peripheral solder balls or columns. The PoP package is often cited as a target for replacement by TSV-based 3D stacks. The SoC-to-DRAM interconnect in the PoP has four separate elements in series: the wirebond in the DRAM package, the vertical interconnect between the top and bottom packages, and the substrate trace and flip chip in the bottom package for the SoC. The equivalent circuit for package-level interconnect in a typical PoP is shown in FIGURE 1.

From FIGURE 2 it is seen that interconnect capacitance in a PoP package is dominated by not just wire bonds (DRAM) but the lateral traces in the substrate of the flip chip package (SoC) as well. Both of these large contributions are eliminated in a TSV based 3D stack.

In a 3D package using TSVs the elimination of substrate traces and wire bonds between the CPU and DRAM leads to a 75% reduction in interconnect capacitance (FIGURE 3) with consequent improvement in maximum bandwidth and power efficiency.
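The power-efficiency gain follows directly: the energy drawn from the supply per signal transition scales as C·V², so at a fixed swing a 75% capacitance reduction cuts I/O energy per bit by the same fraction. A sketch (the ~2 pF figure echoes the Figure 2 caption, the swing is illustrative, and the 75% ratio is from the text):

```python
def energy_per_transition_j(c_farads, v_swing_v):
    """Charge energy drawn from the supply per full voltage swing: E = C * V^2."""
    return c_farads * v_swing_v ** 2

pop_c, tsv_c = 2.0e-12, 0.5e-12  # ~2 pF PoP interconnect vs. 75% less with TSVs
saving = 1 - (energy_per_transition_j(tsv_c, 1.2) /
              energy_per_transition_j(pop_c, 1.2))
# saving == 0.75: energy per bit falls in direct proportion to capacitance
```

The same proportionality is why the eliminated wirebonds and substrate traces, the two largest contributors in Figure 2, dominate the improvement.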

Effect of parasitics

Not only do interconnect parasitics cause power loss during data transmission, but they also affect the waveform of the digital signal. For chips with given input/output buffer characteristics, higher capacitance slows down the rising and falling edges [1,2]. Inductance causes more noise and constricts the eye diagram. Higher interconnect parasitics thus limit the maximum bandwidth for error-free data transmission through a package or PCB.

TSV-based 3D stacking

As has been previously stated, a major reason for developing TSV technology is to improve data transmission – measured by bandwidth and power efficiency – between chips and go beyond the bandwidth limits imposed by conventional interconnect. Recently a national lab in western Europe reported results [3] of stacking a single DRAM chip on a purpose-designed SoC with TSVs in a 4 x 128-bit wide I/O format at a clock rate of just 200MHz. They were able to demonstrate a bandwidth of 12.8 GB/sec (2X that of a PoP with LPDDR3 running at 800MHz). Not surprisingly, the power efficiency reported for data transfer (0.9 pJ/bit) was only a quarter of that for the PoP case.
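These figures are easy to sanity-check: a 4 × 128-bit interface moves 512 bits per clock. A sketch (single data rate assumed; function names are illustrative):

```python
def io_bandwidth_gb_s(bus_bits, clock_hz, transfers_per_clock=1):
    """Peak bandwidth of a wide, slow memory interface, in GB/s."""
    return bus_bits * clock_hz * transfers_per_clock / 8 / 1e9

def io_power_w(bandwidth_gb_s, pj_per_bit):
    """I/O power implied by an energy-per-bit figure at full bandwidth."""
    return bandwidth_gb_s * 1e9 * 8 * pj_per_bit * 1e-12

bw = io_bandwidth_gb_s(4 * 128, 200e6)  # 12.8 GB/s peak
p = io_power_w(bw, 0.9)                 # ~92 mW of I/O power at 0.9 pJ/bit
```

The wide-and-slow approach is the whole point of TSV stacking: the same bandwidth over a narrow PoP bus would require a far higher clock and, at roughly 4X the energy per bit, several times the I/O power.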

Despite a string of encouraging results over the last three years from several such test vehicles, TSV-based 3D stacking technology is not yet mature for volume production. This is true for the TSV and manufacturing technology chosen by a majority of developers, namely filling the TSVs with copper and thinning the wafers in parallel but separately, which requires bonding/debonding to carrier wafers. The problems with filling the TSVs with copper have been apparent for several years and affect electrical design [4]. The problem arises from the large thermal expansion mismatch between copper and silicon and the stress it causes in the area surrounding copper-filled TSVs, which alters electron mobility and circuit performance. The immediate solution is to maintain keep-out zones around the TSVs; however, this affects routing and the length of on-die interconnect. Since the stress field around copper-filled TSVs depends on the square of the via diameter, smaller-diameter TSVs are now being developed to shrink the keep-out zone.

Only now are the problems of debonding thinned wafers with TSVs, such as fracturing, and of subsequent handling being addressed, through the development of new adhesive materials that can be depolymerized by laser so that thinned wafers can be removed from the carrier without stress.

The above problems were studied and avoided by the pioneering manufacturer of 3D memory stacks. It changed the via fill material from copper to tungsten, which has a small CTE mismatch with silicon, and opted for a sequential bond/thin process for stacked wafers, thereby totally avoiding any issues from bond/debond or thin-wafer handling.

It is baffling why such alternative materials and process flows for TSVs are not being pursued even by U.S.-based foundries, which seem to take their technical cues instead from a national laboratory in a small European nation with no commercial production of semiconductors!

Figure 3: When TSVs (labeled VI) replace the conventional interconnect in a PoP package, the parasitic capacitance of interconnect between chips, such as SoC and DRAM, is reduced by 75%.

Options for CPU to memory integration

Given the delay in getting 3D TSV technology ready at foundries, it is natural that alternatives like 2.5D – planar MCMs on high-density silicon substrates with TSVs – have garnered a lot of attention. However, the additional cost of the silicon substrate in 2.5D must be justified from a performance and/or footprint standpoint. Interconnect parasitics due to wiring between two adjacent chips in a 2.5D module are significantly smaller than those in a system built on PCBs with packaged chips, but they are orders of magnitude larger than what is possible in a true 3D stack with TSVs. Therefore, building a 2.5D module of a CPU and an adjacent stack of memory chips with TSVs would reduce the size and cost of the silicon substrate, but won’t deliver performance anywhere near an all-TSV 3D stack of CPU and memory.

integration_table

Alternatives to TSVs for package level integration

Integrating a non-custom CPU to memory chips in a 3D stack would require the addition of redistribution layers, with a consequent increase in interconnect length and degradation of performance. In such cases it may be preferable to avoid adding TSVs to the CPU chip altogether and integrate the CPU with a 3D memory stack via a substrate in a double-sided package configuration. The substrate used is silicon with TSVs and high-density interconnects. Test vehicles for such an integration scheme have been built and their electrical parameters evaluated [5,6]. For cost-driven applications, e.g., smartphones, the cost of the large silicon substrates used above may be prohibitive, and the conventional PoP package may need to be upgraded instead. One approach is to shrink the pitch of the vertical interconnects between the top and bottom packages and quadruple the number of these interconnects and the width of the memory bus [7,8]. While this mechanical approach would allow an increase in bandwidth, unlike TSV-based solutions it would not reduce I/O power consumption, as nothing is done to reduce the parasitic capacitance of the interconnect previously discussed (FIGURE 3).

A novel concept of "Active Interconnects" has been proposed and developed at APSTL. This concept takes an electrical, rather than mechanical, approach to equaling the performance of TSVs [1] and replacing these mechanically complex intrusions into live silicon chips. Compensation circuits on an additional IC are inserted into the interconnect path of a conventional PoP package for a smartphone (FIGURE 4) to create the SuperPoP package, with bandwidth and power efficiency approaching that of TSV-based 3D stacks without having to insert any troublesome TSVs into the active chips themselves.

integration_fig4
Figure 4: Cross-section of an APSTL SuperPoP package under development to equal the performance of TSV-based 3D stacks. An integrated circuit with compensation circuits for each interconnect is inserted between the two layers of a PoP for smartphones. This chip contains through vias and avoids insertion of TSVs in high-value dice for the SoC or DRAM.

Conclusion
A wide array of package-level integration technologies now available to chip and system designers has been discussed. The performance of package-level interconnect has become ever more important for system performance in terms of bandwidth and power efficiency. The traditional approach of improving package electrical performance by shrinking interconnect length and increasing interconnect density continues with its latest iteration, namely TSVs. Like previous innovations, TSVs too suffer from mechanical complications, only now magnified by the stress effects of TSVs on device performance. Further development of TSV technology must not only solve all remaining problems of the current mainstream technology, including Cu-filled vias and parallel thinning of wafers, but also simplify the process where possible. This includes adopting the more successful material (Cu-capped W vias) and process choices (sequential wafer bond and thin) already in production. In the meantime, innovative concepts like Active Interconnect, which avoids using TSVs altogether, and the APSTL SuperPoP built on this concept show promise for cost-driven, power-sensitive applications like smartphones. •

References
1. Gupta, D., "A novel non-TSV approach to enhancing the bandwidth in 3D packages for processor-memory modules," IEEE ECTC 2013, pp. 124-128.

2. Karim, M., et al., "Power Comparison of 2D, 3D and 2.5D Interconnect Solutions and Power Optimization of Interposer Interconnects," IEEE ECTC 2013, pp. 860-866.

3. Dutoit, D., et al., "A 0.9 pJ/bit, 12.8 GByte/s WideIO Memory Interface in a 3D-IC NoC-based MPSoC," 2013 Symposium on VLSI Circuits Digest of Technical Papers.

4. Yang, J-S., et al., "TSV Stress Aware Timing Analysis with Applications to 3D-IC Layout Optimization," 47th ACM/IEEE Design Automation Conference (DAC), June 2010.

5. Tzeng, P-J., et al., "Process Integration of 3D Si Interposer with Double-Sided Active Chip Attachments," IEEE ECTC 2013, pp. 86-93.

6. Beyene, W., et al., "Signal and Power Integrity Analysis of a 256-GB/s Double-Sided IC Package with a Memory Controller and 3D Stacked DRAM," IEEE ECTC 2013, pp. 13-21.

7. Mohammed, I., et al., "Package-on-Package with Very Fine Pitch Interconnects for High Bandwidth," IEEE ECTC 2013, pp. 923-928.

8. Hu, D.C., "A PoP Structure to Support I/O over 1000," IEEE ECTC 2013, pp. 412-416.


DEV GUPTA is the CTO of APSTL, Scottsdale, AZ ([email protected]).

Inside the Hybrid Memory Cube


September 18, 2013

The HMC provides a breakthrough solution that delivers unmatched performance with the utmost reliability.

Since the beginning of the computing era, memory technology has struggled to keep pace with CPUs. In the mid-1970s, CPU design and semiconductor manufacturing processes began to advance rapidly. CPUs have used these advances to increase core clock frequencies and transistor counts. Conversely, DRAM manufacturers have primarily used the advancements in process technology to rapidly and consistently scale DRAM capacity. But as more transistors were added to systems to increase performance, the memory industry was unable to keep pace in terms of designing memory systems capable of supporting these new architectures. In fact, the number of memory controllers per core decreased with each passing generation, increasing the burden on memory systems.

To address this challenge, in 2006 Micron tasked internal teams to look beyond memory performance. Their goal was to consider overall system-level requirements and create a balanced architecture for higher system-level performance with more capable memory and I/O systems. The Hybrid Memory Cube (HMC), which blends the best of logic and DRAM processes into a heterogeneous 3D package, is the result of this effort. At its foundation is a small logic layer that sits below vertical stacks of DRAM die connected by through-silicon vias (TSVs), as depicted in FIGURE 1. An energy-optimized DRAM array provides access to memory bits via the internal logic layer and TSVs, resulting in an intelligent memory device optimized for performance and efficiency.

By placing intelligent memory on the same substrate as the processing unit, each system can do what it’s designed to do more efficiently than previous technologies. Specifically, processors can make use of all of their computational capability without being limited by the memory channel. The logic die, with high-performance transistors, is responsible for DRAM sequencing, refresh, data routing, error correction, and high-speed interconnect to the host. HMC’s abstracted memory decouples the memory interface from the underlying memory technology and allows memory systems with different characteristics to use a common interface. Memory abstraction insulates designers from the difficult parts of memory control, such as error correction, resiliency and refresh, while allowing them to take advantage of memory features such as performance and non-volatility. Because HMC supports up to 160 GB/s of sustained memory bandwidth, the biggest question becomes, “How fast do you want to run the interface?”

The HMC Consortium
A radically new technology like HMC requires a broad ecosystem of support for mainstream adoption. To address this challenge, Micron, Samsung, Altera, Open-Silicon, and Xilinx collaborated to form the HMC Consortium (HMCC), which was officially launched in October 2011. The Consortium's goals included pulling together a wide range of OEMs, enablers, and tool vendors to define an industry-adoptable serial interface specification for HMC. The consortium delivered on this goal within 17 months, introducing the world's first HMC interface and protocol specification in April 2013.
The specification provides a short-reach (SR), very short-reach (VSR), and ultra short-reach (USR) interconnection across physical layers (PHYs) for applications requiring tightly coupled or close proximity memory support for FPGAs, ASICs and ASSPs, such as high-performance networking and computing along with test and measurement equipment.

3Dintegration_fig1
FIGURE 1. The HMC employs a small logic layer that sits below vertical stacks of DRAM die connected by through-silicon-vias (TSVs).

The next goal for the consortium is to develop a second set of standards designed to increase data rate speeds. This next specification, which is expected to gain consortium agreement by 1Q14, shows SR speeds improving from 15 Gb/s to 28 Gb/s and VSR/USR interconnection speeds increasing from 10 to 15–28 Gb/s.

Architecture and Performance

Other elements that separate HMC from traditional memories include raw performance, simplified board routing, and unmatched RAS features. The DRAM within the HMC device is organized into sixteen individual, self-supporting vaults. Each vault delivers 10 GB/s of sustained memory bandwidth for an aggregate cube bandwidth of 160 GB/s. Within each vault there are two banks per DRAM layer, for a total of 128 banks in a 2GB device or 256 banks in a 4GB device. The impact on system performance is significant, with lower queue delays and greater availability of data responses compared to conventional memories that run banks in lock-step. Not only is there massive parallelism, but HMC also supports atomics that reduce external traffic and offload remedial tasks from the processor.
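The vault figures quoted above can be checked with simple arithmetic. Note that the split into 4 DRAM layers (2GB device) and 8 layers (4GB device) is an inference from the stated bank counts, not something the text spells out:

```python
# Quick arithmetic check of the HMC vault figures quoted in the text.
VAULTS = 16
BW_PER_VAULT_GBS = 10            # GB/s sustained per vault
BANKS_PER_LAYER_PER_VAULT = 2    # two banks per DRAM layer, per vault

aggregate_bw = VAULTS * BW_PER_VAULT_GBS             # 160 GB/s cube bandwidth
banks_2gb = VAULTS * BANKS_PER_LAYER_PER_VAULT * 4   # 4 DRAM layers -> 128 banks
banks_4gb = VAULTS * BANKS_PER_LAYER_PER_VAULT * 8   # 8 DRAM layers -> 256 banks
print(aggregate_bw, banks_2gb, banks_4gb)  # 160 128 256
```

The numbers line up with the 160 GB/s, 128-bank, and 256-bank figures in the article.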

As previously mentioned, the abstracted interface is memory-agnostic and uses high-speed serial buses based on the HMCC protocol standard. Within this uncomplicated protocol, commands such as 128-byte WRITE (WR128), 64-byte READ (RD64), or dual 8-byte ADD IMMEDIATE (2ADD8), can be randomly mixed. This interface enables bandwidth and power scaling to suit practically any design—from “near memory,” mounted immediately adjacent to the CPU, to “far memory,” where HMC devices may be chained together in futuristic mesh-type networks. A near memory configuration is shown in FIGURE 2, and a far memory configuration is shown in FIGURE 3. JTAG and I2C sideband channels are also supported for optimization of device configuration, testing, and real-time monitors.
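The free mixing of commands described above can be sketched in a few lines. The command names (WR128, RD64, 2ADD8) come from the HMCC specification as quoted in the text, but the packet representation and payload sizes below are invented for illustration and are not the actual HMCC packet format:

```python
# Hypothetical sketch of mixing HMC request commands on one serial link.
# Payload sizes are assumptions for illustration, not HMCC-defined values.
COMMANDS = {"WR128": 128, "RD64": 64, "2ADD8": 16}  # payload bytes (assumed)

def make_request(cmd: str, addr: int) -> dict:
    """Build a toy request packet for the given command and address."""
    if cmd not in COMMANDS:
        raise ValueError(f"unknown command: {cmd}")
    return {"cmd": cmd, "addr": addr, "payload_bytes": COMMANDS[cmd]}

# Commands of different types can be freely interleaved in the request stream.
stream = [make_request(c, a * 0x80)
          for a, c in enumerate(["WR128", "RD64", "2ADD8", "RD64"])]
print(sum(r["payload_bytes"] for r in stream))  # 272
```

The point of the abstracted interface is that the host issues such a mixed stream without ever sequencing the underlying DRAM itself.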

HMC board routing uses inexpensive, standard high-volume interconnect technologies, routes without complex timing relationships to other signals, and has significantly fewer signals. In fact, 160 GB/s of sustained memory bandwidth is achieved using only 262 active signals (66 signals for a single link of up to 60 GB/s of memory bandwidth).

3Dintegration_fig2
FIGURE 2. The HMC communicates with the CPU using a protocol defined by the HMC consortium. A near memory configuration is shown.
3Dintegration_fig3
FIGURE 3. A far memory communication configuration.


A single robust HMC package includes the memory, memory controller, and abstracted interface. This enables vault-controller parity and ECC correction with data scrubbing that is invisible to the user; self-correcting in-system lifetime memory repair; extensive device health-monitoring capabilities; and real-time status reporting. HMC also features a highly reliable external serializer/deserializer (SERDES) interface with exceptional low-bit error rates (BER) that support cyclic redundancy check (CRC) and packet retry.

HMC will deliver 160 GB/s of bandwidth, a 15X improvement compared to a DDR3-1333 module running at 10.66 GB/s. With energy efficiency measured in picojoules per bit, HMC is targeted to operate in the 20 pJ/bit range. Compared to DDR3-1333 modules that operate at about 60 pJ/bit, this represents roughly a 70% improvement in efficiency. HMC also features an almost 90% pin-count reduction: 66 pins for HMC versus ~600 pins for a 4-channel DDR3 solution. Given these comparisons, it's easy to see the significant gains in performance and the huge savings in both footprint and power usage.
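These comparisons follow directly from the raw numbers quoted above; a quick check (the exact efficiency gain works out to about 67%, which the article rounds to roughly 70%):

```python
# Checking the HMC vs. DDR3-1333 comparisons against the figures in the text.
hmc_bw, ddr3_bw = 160.0, 10.66   # GB/s
hmc_pj, ddr3_pj = 20.0, 60.0     # pJ/bit
hmc_pins, ddr3_pins = 66, 600    # single HMC link vs. 4-channel DDR3

speedup = hmc_bw / ddr3_bw       # ~15x bandwidth improvement
eff_gain = 1 - hmc_pj / ddr3_pj  # ~67%, quoted as roughly 70%
pin_cut = 1 - hmc_pins / ddr3_pins  # 89%, "almost 90%" pin reduction
print(round(speedup), round(eff_gain * 100), round(pin_cut * 100))  # 15 67 89
```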

Market Potential

HMC will enable new levels of performance in applications ranging from large-scale core and leading-edge networking systems, to high-performance computing, industrial automation, and eventually, consumer products.

Embedded applications will benefit greatly from high-bandwidth and energy-efficient HMC devices, especially applications such as testing and measurement equipment and networking equipment that utilizes ASICs, ASSPs, and FPGA devices from both Xilinx and Altera, two Developer members of the HMC Consortium. Altera announced in September that it has demonstrated interoperability of its Stratix FPGAs with HMC to benefit next-generation designs.

According to research analysts at Yole Développement, TSV-enabled devices are projected to account for nearly $40B by 2017, which is 10% of the global chip business. To drive that growth, this segment will rely on leading technologies like HMC.

3Dintegration_fig4
FIGURE 4. Engineering samples are set to debut in 2013, with 4GB production in 2014.

Production schedule
Micron is working closely with several customers to enable a variety of applications with HMC. HMC engineering samples of a four-link 31×31×4mm package are expected later this year, with volume production beginning in the first half of 2014. Micron's 4GB HMC is also targeted for production in 2014.

Future stacks, multiple memories
Moving forward, we will see HMC technology evolve as volume production reduces costs for TSVs and HMC enters markets where traditional DDR-type memory has resided. Beyond DDR4, we see this class of memory technology becoming mainstream, not only because of its extreme performance, but because of its ability to overcome the effects of process scaling seen in the NAND industry. HMC Gen3 is on the horizon, with a performance target of 320 GB/s and an 8GB density. A packaged HMC is shown in FIGURE 4.

Among the benefits of this architectural breakthrough is the future ability to stack multiple memories onto one chip. •


THOMAS KINSLEY is a Memory Development Engineer and ARON LUNDE is the Product Program Manager at Micron Technology, Inc., Boise, ID.

Packaging at The ConFab


September 18, 2013

At The ConFab conference in Las Vegas in June, Mike Ma, VP of Corporate R&D at Siliconware (SPIL), announced a new business model for interposer-based SiPs, namely the "turnkey OSAT model." In his presentation "The Expanding Role of OSATs in the Era of System Integration," Ma examined the obstacles to 2.5/3D implementation and concluded that cost is still a significant deterrent in all segments.

By Dr. Phil Garrou, Contributing Editor

Over the past few years, TSMC has been proposing a turnkey foundry model, which has met with significant resistance from its IC customers. Under the foundry turnkey model, the foundry handles all operations, including chip fabrication, interposer fabrication, assembly, and test. Foundry rivals UMC and GlobalFoundries have been supporting an OSAT/foundry collaboration model in which the foundries would fabricate the chips with TSVs and the OSATs would assemble chips and interposers that could come from several different sources.

packaging
FIGURE 1. Amkor’s “possum” stacking technology.

SPIL is the first OSAT to propose this OSAT-centric model, in which the interposer is fabricated by the OSAT, which then assembles and tests modules made with chips from multiple sources. The impediment to this route in the past has been the lack of OSAT capability to fabricate fine-pitch interposers, which require dual damascene processing capability that until now was only available in the foundries. This week SPIL announced that the equipment for fine-pitch interposer capability (>2 layers, 0.4-3µm metal line width and 0.5µm TSV) has been purchased and is in place.

Ma indicated that while the foundries are not happy with this SPIL proposal, their customers, especially fabless customers, have been very supportive. He feels the inherently lower cost structure of OSATs will have a positive impact on the 2.5/3D market, which has been somewhat stagnant since the FPGA and memory product announcements in 2010.

Also presenting at The ConFab: Bob Lanzone, Senior VP of Engineering Solutions for Amkor. He, like the other OSATS, sees smartphones and tablets driving the market moving forward.

Amkor's update on copper pillar technology indicates an expected doubling in demand this year and continued expansion into "all flip chip products." On TSV status, Amkor takes credit for being the first into production, with TSMC and Xilinx.

Looking at the 2.5D TSV and interposer supply chain, Amkor sees different requirements for high-end, mid-range, and lower-cost products. For the high end, such as networking and servers, silicon interposers are needed with <2µm L/S and 25k μbumps per die. Amkor is engaged with foundries to deliver silicon interposers today.

For mid-range products, such as gaming, graphics, HDTV, and tablets, silicon or glass interposers are needed with <3µm L/S, <25ns latency and ~10k μbumps/die. Amkor is not actively pursuing glass interposers yet, as the infrastructure is still immature.

For lower-cost products, such as lower-end tablets and smartphones, silicon, glass or laminate interposers are needed, with <8µm L/S, low resistance and ~2k μbumps per die. Lanzone said a cost-reduction path must be provided to enable this sector, and Amkor is working with the laminate supply chain to do that. It is targeting 2014 for its "possum" stacking, as shown in FIGURE 1.

Crossbar, Inc., a start-up company, unveiled a new resistive RAM (RRAM) technology that will be capable of storing up to one terabyte (TB) of data on a single 200mm2 chip. A working memory array was produced at a commercial fab, and Crossbar is entering the first phase of productization. "We have achieved all the major technical milestones that prove our RRAM technology is easy to manufacture and ready for commercialization," said George Minassian, chief executive officer, Crossbar, Inc. The company is backed by Artiman Ventures, Kleiner Perkins Caufield & Byers and Northern Light Venture Capital.

The technology, which was conceived by Professor Wei Lu of the University of Michigan, is based on a simple three-layer structure of silver, amorphous silicon and silicon (FIGURE 1). The resistance switching mechanism is based on the formation of a filament in the switching material when a voltage is applied between the two electrodes. Minassian said the RRAM is very stable, capable of withstanding temperature swings up to 125°C, with up to 10,000 cycles, and a retention of 10 years. “The filaments are rock solid,” he said.

Crossbar has filed 100 unique patents, with 30 already issued, relating to the development, commercialization and manufacturing of RRAM technology.

After completing the technology transfer to Crossbar’s R&D fab and technology analysis and optimization, Crossbar has now successfully developed its demonstration product in a commercial fab. This working silicon is a fully integrated monolithic CMOS controller and memory array chip. The company is currently completing the characterization and optimization of this device and plans to bring its first product to market in the embedded SOC market.

Sherry Garber, Founding Partner, Convergent Semiconductors, said: "RRAM is widely considered the obvious leader in the battle for a next-generation memory, and Crossbar is the company most advanced in showing a working demo that proves the manufacturability of RRAM. This is a significant development in the industry, as it provides a clear path to commercialization of a new storage technology, capable of changing the future landscape of electronics innovation."

crossbar
FIGURE 1. The resistance switching mechanism of Crossbar's technology is based on the formation of a filament in the silicon-based switching material when a voltage is applied between the two electrodes.

Crossbar technology can be stacked in 3D, delivering multiple terabytes of storage on a single chip. Its simplicity, stackability and CMOS compatibility enables logic and memory to be integrated onto a single chip at the latest technology node (FIGURE 2).

Crossbar's technology will deliver 20x faster write performance, 20x lower power consumption, and 10x the endurance at half the die size, compared to today's best-in-class NAND flash memory. Minassian said the biggest advantage of the technology is its simplicity. "That allowed us in three years' time to get from technology understanding, characterization, cell array and put a device together," he said.

Minassian said RRAM compares favorably with NAND, which is getting more complex and expensive. "In 3D NAND, you put all of these thin layers on top of each other – 32 layers, or 64 or 128 in some cases – then you have to etch them, you have to slice them all at once, and the equipment required for that accuracy and that geometry is very expensive. This is one of the reasons that 3D NAND has been very difficult to introduce." With the Crossbar approach, "you're always dealing with three layers. It's much easier to stack these and it gives you a huge density advantage," Minassian said.

“The switching media is highly resistive,” explains Minassian. “If you try to read the resistance between top and bottom electrode without doing anything, it’s a high resistance. That’s the off state. To turn on the device, we apply a positive voltage to the top electrode. That ionizes the metal on the top layer and puts the metal ions into the switching media. The metal ions form a filament that connect the top and bottom electrode. The moment they hit the bottom electrode, you have a short, which means that the top and bottom electrode are connected which means they have a low resistance.” The low resistance state is the on state. He said that although silver is not commonly used in front-end CMOS processing, the RRAM memory formation process is a back-end process. “You produce all your CMOS and then right before the device exits the fab, you put the silver on top,” he said. The silver is deposited, encapsulated, etched and then packaged. “That equipment is available, you just have to isolate it at the end,” Minassian said.

crossbar_2
FIGURE 2. Crossbar’s simple and scalable memory cell structure enables a new class of 3D RRAM which can be incorporated into the back end of line of any standard CMOS manufacturing fab.

The approach is also CMOS compatible, with processes used to fabricate the memory layers all running at less than 400°C. “This allows you to not only be CMOS compatible, but it allows you to stack more and more of these memory layers on top of each other,” Minassian said. “You can put the logic, the controllers and microprocessors, next to the memory in the same die. That allows you to simplify packaging and increase performance.”

Another advantage compared to NAND is that the controllers used to address the cells can be less complicated. Minassian said that in conventional cells, 30 electrons are required to produce 1 Volt. “If you shrink that to a smaller node, the number of electrons is less. Fewer electrons are much harder to detect. You need a massive controller that does error recovery and complex coding so if the bits are changed, it can still provide you the right program to execute.” Also, because the Crossbar RRAM is capable of 10,000 write cycles, less complicated controllers are needed. Today’s NAND is capable of only 1000 write cycles. “If you write information 1000 times, that cell is destroyed. It will not contain or maintain the information. You have this complex controller that keeps track of how many cells have been written, how many times, to make sure all of them are aged equally,” Minassian said.

Non-volatile memory, expected to grow to become a $60 billion market in 2013, is the most common storage technology used for both code storage (NOR) and data storage (NAND) in a wide range of electronics applications. Crossbar plans to bring to market standalone chip solutions, optimized for both code and data storage, used in place of traditional NOR and NAND Flash memory. Crossbar also plans to license its technology to SOC developers for integration into next-generation systems-on-chips (SOC).

Michael Yang, Senior Principal Analyst, Memory and Storage, IHS, said: “Ninety percent of the data we store today was created in the past two years. The creation and instant access of data has become an integral part of the modern experience, continuing to drive dramatic growth for storage for the foreseeable future. However, the current storage medium, planar NAND, is seeing challenges as it reaches the lower lithographies, pushing against physical and engineering limits. The next generation non-volatile memory, such as Crossbar’s RRAM, would bypass those limits, and provide the performance and capacity necessary to become the replacement memory solution.” •

P. Singer

Edwards, a supplier of vacuum products, abatement systems and related value-added services, and Sweden-based Atlas Copco entered into a definitive merger agreement in a transaction valued at up to approximately $1.6 billion, including the assumption of debt.

Under the terms of the merger agreement, a subsidiary of Atlas Copco will acquire Edwards for a per-share consideration of up to $10.50, which includes a fixed cash payment of $9.25 at closing and an additional payment of up to $1.25 per share post-closing, depending on Edwards' achievement of 2013 revenue within the range of £587.5 million to £650 million and achievement of a related Adjusted EBITDA target within the range of £113.9 million to £145 million. The transaction is expected to close in the first quarter of 2014.

Depending on the amount of any additional payment, the merger consideration represents a premium of approximately 11 percent to 26 percent to Edwards’ 30-day average closing share price of $8.33 up to August 16, 2013, the last trading day prior to this announcement. Edwards priced its initial public offering on The NASDAQ Global Select Market on May 10th 2012 at $8.00 per share.
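The quoted premium range follows directly from the per-share figures in the text ($9.25 fixed plus up to $1.25 contingent, against the $8.33 30-day average close); a quick check:

```python
# Verifying the 11-26 percent premium range from the share figures quoted above.
base_cash = 9.25    # fixed cash payment at closing, $/share
earn_out = 1.25     # maximum contingent payment, $/share
avg_close = 8.33    # 30-day average closing price, $/share

low = base_cash / avg_close - 1               # premium with cash only
high = (base_cash + earn_out) / avg_close - 1 # premium with full earn-out
print(round(low * 100), round(high * 100))    # 11 26
```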

Edwards’ shareholders representing approximately 84% of the current shares outstanding have entered into voting agreements with Atlas Copco to vote in favor of the merger, subject to the conditions set out in the voting agreements. Further, the Board of Directors of Edwards unanimously recommends the offer to all Edwards shareholders.

Edwards and Atlas Copco have a complementary business fit. Both companies share a similar strategic direction, with growth focused on technology leadership and customer service. The benefits of greater scale will help accelerate Edwards' growth strategy and provide more opportunities for Edwards' employees. Upon completion of the transaction, a new Vacuum Solutions Division will be formed within the Atlas Copco Compressor Technique business area, with headquarters in Crawley, UK.

news_quaterly

The Semiconductor Industry Association (SIA) announced that worldwide sales of semiconductors reached $74.65 billion during the second quarter of 2013, an increase of 6 percent from the first quarter when sales were $70.45 billion. This marks the largest quarterly increase in three years. Global sales for June 2013 hit $24.88 billion, an increase of 2.1 percent compared to June 2012 and 0.8 percent higher than the May 2013 total. Regionally, sales in the Americas jumped 8.6 percent in Q2 compared to Q1 and 10.6 percent in June 2013 compared to June 2012, marking the region’s largest year-over-year increase of 2013. All monthly sales numbers are compiled by the World Semiconductor Trade Statistics (WSTS) organization and represent a three-month moving average.

“There’s no question the global semiconductor industry has picked up steam through the first half of 2013, led largely by the Americas,” said Brian Toohey, president and CEO, Semiconductor Industry Association. “We have now seen consistent growth on a monthly, quarterly, and year-to-year basis, and sales totals have exceeded the latest industry projection, with sales of memory products showing particular strength.”

Quarterly sales outperformed the World Semiconductor Trade Statistics (WSTS) organization's latest industry forecast, which projected quarter-over-quarter growth of 4.6 percent globally and 3.4 percent for the Americas (compared to actual increases of 6 percent and 8.6 percent, respectively). Total year-to-date sales of $145.1 billion also exceeded the WSTS projection of $144.1 billion. Actual year-to-date sales through June are 1.5 percent higher than they were at the same point in 2012.

Regionally, sales in June increased compared to May in the Americas (3.5 percent), Asia Pacific (0.4 percent), and Europe (0.1 percent), but declined slightly in Japan (-0.9 percent). Compared to the same month in 2012, sales in June increased substantially in the Americas (10.6 percent), moderately in Asia Pacific (5.4 percent), and slightly in Europe (0.8 percent), but dropped steeply in Japan (-20.8 percent), largely due to the devaluation of the Japanese yen.

“While we welcome this encouraging data, it is important to recognize the semiconductor workforce that drives innovation and growth in our industry,” continued Toohey. “A key roadblock inhibiting our innovation potential is America’s outdated high-skilled immigration system, which limits semiconductor companies’ access to the world’s top talent. The House of Representatives should use the August recess to work out their political differences on this issue and return to Washington next month ready to approve meaningful immigration reform legislation.” •

Light and proximity sensors in mobile handsets and tablets are set for expansive double-digit growth within a five-year period, thanks to increasing usage by electronic giants Samsung and Apple. Light and proximity sensors can detect a user’s presence as well as help optimize display brightness and color rendering.

Revenue for the sensors is forecast to reach $782.2 million this year, up a prominent 41 percent from $555.1 million in 2012, according to insights from the MEMS and Sensors Service at information and analytics provider IHS. The market is also expected to grow in the double digits for the next three years before moderating to a still-robust eight percent in 2017. By then, revenue will reach $1.3 billion, as shown in the figure on page 10.

“The continued growth of the smartphone and tablet markets serve as the foundation of a bright future for light sensors,” said Marwan Boustany, senior analyst for MEMS & sensors at IHS. “Market leaders in these areas are driving the growth, with Apple pioneering their adoption and Samsung later taking the lead in their usage.”

drivenbyapple

Sensor segments
There are three types of light and proximity sensors. Ambient light sensors (ALS) measure the intensity of the surrounding light enveloping a cellphone or tablet to adjust screen brightness and save battery power. RGB sensors measure a room's color temperature via the red, green and blue wavelengths of light to help correct white balance in the device display. Proximity sensors disable a handset's touch screen when it is held close to the head, in order to avoid unwanted input, and also turn off the display light to save battery power.

Overall, the compound annual growth rate for the sensors from 2012 to 2017 equates to 19 percent.
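The quoted 19 percent figure can be verified from the revenue numbers given earlier in the piece ($555.1 million in 2012 growing to $1.3 billion in 2017):

```python
# Verifying the light/proximity sensor CAGR from the revenue figures in the text.
rev_2012 = 555.1e6   # 2012 revenue, USD
rev_2017 = 1.3e9     # forecast 2017 revenue, USD
years = 5

cagr = (rev_2017 / rev_2012) ** (1 / years) - 1  # compound annual growth rate
print(round(cagr * 100))  # 19
```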

Driving this growth is the shift in use from ALS to RGB in mid- to high-end smartphones; the growing deployment of proximity sensors with gesture capabilities compared to just simple proximity sensors; and the price premiums associated with such changes in usage.

Aside from their most conspicuous use in wireless communications typified by handsets and tablets, light sensors are also utilized in various other applications. These include consumer electronics and data processing for devices like televisions, laptops and PC tablets; the industrial market for home automation, medical electronics and general lighting; and the automotive space for vehicle displays and car functionalities like rain sensors.

Samsung and Apple are leaders in sensor use
Both Samsung and Apple have made use of light and proximity sensors in recent years, helping the sensor market grow in no small measure.

In 2010, Apple included an RGB and proximity sensor in its iPhone 4 and an RGB sensor in its iPad, even though the sensors were subsequently dropped in the iPhone 4S, iPhone 5 and later iPads. Apple let go of the sensors, which at the time were available in a combination, or combo, package, in favor of discrete solutions consisting of individual proximity and ALS sensors for its products. While combo sensors offer the convenience of a single configured package and sourcing from a single supplier, discrete solutions can offer flexibility in the choice of sensor.

Samsung, meanwhile, has gone on to use light and proximity sensors in even larger quantities than Apple. Last year Samsung included an RGB, proximity and infrared (IR) combo sensor in both its Galaxy SIII smartphone and its flagship Galaxy Note 2, a device the company calls a "phablet." This year, Samsung deployed a discrete RGB sensor in its latest smartphone, the Galaxy S4, switching from a combo package because no combo sensor with gesture capability was available. Samsung's move toward RGB sensors in its high-end handsets currently sets the tone for the RGB sensor market, given Samsung's high unit sales. IHS believes this move by the South Korean maker will open the door for other brands to include RGB sensors in their handsets and tablets as well.

The new gesture functionality, such as that found in the Galaxy S4, will see especially vigorous growth in the years to come, with revenue enjoying an astonishing 44 percent compound annual growth rate from 2013 to 2017. Maxim Integrated of California provides the discrete gesture solution for the Galaxy S4, while Japan's Sharp will be producing a combo sensor product with gesture capabilities by September this year.

Sensor suppliers and buyers tussle
Samsung and Apple are the top buyers of light sensors, accounting for more than 50 percent of light sensor revenue last year. Samsung pulled away from Apple after impressive 90 percent growth in sensor purchases between 2011 and 2012, compared to Apple's 54 percent growth in spending over the same period.

This is due to Samsung’s shift toward RGB sensors in its Note 2 and SIII devices, which command higher average selling prices. In third place after Samsung and Apple is a collective group of original equipment manufacturers from China. Included here are global players with significant name recognition like Huawei Technologies, ZTE and Lenovo, as well as a multitude of lesser-known companies such as Coolpad and Xiaomi.

Meanwhile, the top sensor suppliers are Austrian-based ams via its Taos unit in Texas, which supplies to Apple; and Capella Microsystems from Taiwan, the top light sensor supplier to Samsung. Together the two manufacturers furnish more than half of the light sensor market. Other important sensor makers are Avago Technologies from California and Sharp from Japan.

It’s apparent that the world’s appetite for electronics has never been greater. That has increasingly taken the form of mobile electronics, including smartphones, tablets and the new “phablets.” People want to watch movies and live sports on their phones. They want their mobile devices to be “situationally aware” and even capable of monitoring their health through sensors. That drives higher bandwidth (5G is on the drawing board), faster data rates and a demand for reduced power consumption to conserve battery life. At the same time, “big data” and the internet of things (IoT) are here, driving demand for server networks and high-performance semiconductors, as well as integrated sensors and inventive gadgets such as flexible displays and human biosensor networks.

All of this is pushing the semiconductor manufacturing industry and related industries (MEMS, displays, packaging and integration, batteries, etc.) in new directions. The tradeoffs that chipmakers must manage among power, performance, area and cost/complexity (PPAC) are now driven not by PCs, but by mobile devices.

In a keynote address at Semicon West 2013, Ajit Manocha, CEO of GlobalFoundries, expanded on his Foundry 2.0 concept, talking about how the requirements of mobile devices were, in fact, changing the entire semiconductor industry. He noted that the mobile business is forecast to be double the size of the PC market in 2016. The mobile business drives many new requirements, said Manocha, including power, performance and features, higher data rates, high-resolution multicore processors and thinner form factors.

Manocha presented the audience with what he sees as today’s Big Five Challenges: cost, device architectures, lithography and EUV, packaging and the 450mm wafer transition. I don’t recall a time when cost wasn’t an issue, but an audience poll revealed that most people believe economic challenges, not technical challenges, will be the main factor limiting industry growth. I agree, but I also think new applications will emerge, particularly in the health field, that could push the industry in yet another new direction.

Peter Singer, Editor-in-Chief