
By Pete Singer, Editor-in-Chief

A new roadmap, the Heterogeneous Integration Technology Roadmap for Semiconductors (HITRS), aims to integrate the fast optical communication made possible by photonic devices with the digital number-crunching capabilities of CMOS.

The roadmap, announced publicly for the first time at The ConFab in June, is sponsored by the IEEE Components, Packaging and Manufacturing Technology Society (CPMT), SEMI and the IEEE Electron Devices Society (EDS).

Speaking at The ConFab, Bill Bottoms, chairman and CEO of 3MT Solutions, said there were four significant issues driving change in the electronics industry that in turn drove the need for the new HITRS roadmap: 1) The approaching end of Moore’s Law scaling of CMOS, 2) Migration of data, logic and applications to the Cloud, 3) The rise of the internet of things, and 4) Consumerization of data and data access.

“CMOS scaling is reaching the end of its economic viability and, for several applications, it has already arrived. At the same time, we have migration of data, logic and applications to the cloud. That’s placing enormous pressures on the capacity of the network that can’t be met with what we’re doing today, and we have the rise of the Internet of Things,” he said. The consumerization of data and data access is something that people haven’t focused on at all, he said. “If we are not successful in doing that, the rate of growth and economic viability of our industry is going to be threatened,” Bottoms said.

These four driving forces present requirements that cannot be satisfied through scaling CMOS. “We have to have lower power, lower latency, lower cost with higher performance every time we bring out a new product or it won’t be successful,” Bottoms said. “How do we do that? The only vector that’s available to us today is to bring all of the electronics much closer together and then the distance between those system nodes has to be connected with photonics so that it operates at the speed of light and doesn’t consume much power. The only way to do this is to use heterogeneous integration and to incorporate 3D complex System-in-Package (SiP) architectures.”

The HITRS is focused on exactly that, including integrating single-chip and multi-chip packaging (including substrates), integrated photonics, integrated power devices, MEMS, RF and analog mixed signal, and plasmonics. “Plasmonics have the ability to confine photonic energy to a space much smaller than wavelength,” Bottoms said. More information on the HITRS can be found at: http://cpmt.ieee.org/technology/heterogeneous-integration-roadmap.html

Bottoms said much of the technology exists today at the component level, but the challenge lies in integration. He noted today’s capabilities (Figure 1) include interconnection (flip-chip and wire bond), antenna, molding, SMT (passives, components, connectors), passives/integrated passive devices, wafer bumping/WLP, photonics layer, embedded technology, die/package stacking and mechanical assembly (laser welding, flex bending).

Figure 1: Building blocks for integrated photonics.

“We have a large number of components, all of which have been built, proven, characterized and in no case have we yet integrated them all. We’ve integrated more and more of them, and we expect to accelerate that in the next few years,” he said.

He also said that all the components exist to make very complex photonic integrated circuits, including beam splitters, microbumps, photodetectors, optical modulators, optical buses, laser sources, active wavelength locking devices, ring modulators, waveguides, WDM (wavelength division multiplexers) filters and fiber couplers. “They all exist, they all can be built with processes that are available to us in the CMOS fab, but in no place have they been integrated into a single device. Getting that done in an effective way is one of the objectives of the HITRS roadmap,” Bottoms explained.

He also pointed to the potential of new device types (Figure 2) that are coming (or already here), including carbon nanotube memory, MEMS photonic switches, spin torque devices, plasmons in CNT waveguides, GaAs nanowire lasers (grown on silicon with waveguides embedded), and plasmonic emission sources (that employ quantum dots and plasmons).

Figure 2: New device types are coming.

The HITRS committee will meet for a workshop at SEMICON West in July.

Kateeva  announced that it has closed its Series D round with $38 million in financing. The newest participant is Samsung Venture Investment Corporation (SVIC). Existing investors also contributed. They include: Sigma Partners, Spark Capital, Madrone Capital Partners, DBL Investors, New Science Ventures, and VEECO Instruments, Inc.

The company has raised more than $110 million since it was founded in 2008.

Kateeva makes the YIELDjet™ platform — a precision deposition platform that leverages inkjet printing to mass produce flexible and large-size OLED panels. The new funds will be used to support the company’s manufacturing strategy and expand its global sales and support infrastructure. Production systems are currently being built at the company’s facility in Menlo Park, Calif. to fulfill early orders.

The funding news coincides with the 2014 OLEDs World Summit taking place this week in Berkeley, Calif.

“Kateeva is a technology leader and has built a significant business in the OLED space,” said Michael Pachos, Senior Investment Manager at SVIC. “The company has demonstrated both a technical and business vision in driving adoption of OLED displays and lighting, and we look forward to contributing to its progress.”

“We believe that OLEDs on flexible substrates play a major role in the insatiable quest for ultra-durable, high-performance, and unbreakable mobile displays, and Kateeva has proven to hold the keys to a critical industry problem,” said Fahri Diner, Managing Director of Sigma Partners and a member of the Board of Directors of Kateeva. “Moreover, we are very excited about Kateeva’s impressive innovations that are poised to make large-panel OLED televisions finally an affordable reality — perhaps the Holy Grail of the display world. In partnership with SVIC, we’re delighted to offer continued support to Kateeva as they rapidly scale operations to support accelerating demand for OLED manufacturing solutions,” Diner continued.

Kateeva Chief Executive Officer Alain Harrus said: “SVIC’s investment speaks volumes about our technology’s enabling value to world-class OLED producers. It will reinforce our leading position and help serve all our customers better. Also, we appreciate our existing investors for their enduring commitment and trusted guidance. Thanks to their confidence in our technology and execution, mass producing OLEDs will be much smoother for leading display manufacturers.”

Blog Review October 14 2013


October 14, 2013

At the recent imec International Technology Forum Press Gathering in Leuven, Belgium, imec CEO Luc Van den hove provided an update on blood cell sorting technology that combines semiconductor technology with microfluidics, imaging and high speed data processing to detect tumorous cancer cells. Pete Singer reports.

Pete Singer attended imec’s recent International Technology Forum in Leuven, Belgium. There, An Steegen, senior vice president of process technology at imec, said FinFETs will likely become the logic technology of choice for the upcoming generations, with high-mobility channels coming into play for the 7 and 5nm generations (2017 and 2019). In DRAM, the MIM capacitor will give way to STT-MRAM. In NAND flash, 3D SONOS is expected to dominate for several generations; the outlook for RRAM remains cloudy.

At Semicon Europa last week, Paul Farrar, general manager of G450C, provided an update on the consortium’s progress in demonstrating 450mm process capability. He said 25 tools will be installed in the Albany cleanroom by the end of 2013, progress has been made on notchless wafers with a 1.5mm edge exclusion zone, they have seen significant progress in wafer quality, and automation and wafer carriers are working.

Phil Garrou reports on developments in 3D integration from Semicon Taiwan. He notes that at the Embedded Technology Forum, Hu of Unimicron looked at panel level embedded technology.

Kathryn Ta of Applied Materials explains how demand for mobile devices is driving materials innovation. She says that about 90 percent of the performance benefits at the smaller (sub-28nm) process nodes come from materials innovation and device architecture, up significantly from the roughly 15 percent contribution in 2000.

Tony Massimini of Semico says the MEMS market is poised for significant growth thanks to a major expansion of applications in smartphones and automotive. In 2013, Semico expects a total MEMS market of $16.8B, but by 2017 it will have expanded to $28.5B, a 70 percent increase in a mere four years’ time.

Steffen Schulze and Tim Lin of Mentor Graphics look at different options for reducing mask write time. They note that a number of techniques have been developed by EDA suppliers to control mask write time by reducing shot count— from simple techniques to align fragments in the OPC step, to more complex techniques of simplifying the data for individual writing passes in multi-pass writing.

If you want to see SOI in action, look no further than the Samsung Galaxy S4 LTE. Peregrine Semi’s main antenna switch on BSOS substrates from Soitec enables the smartphone to support 14 frequency bands simultaneously, for a three-fold improvement in download times.

Vivek Bakshi notes that a lot of effort goes into enabling EUV sources for EUVL scanners and mask defect metrology tools to ensure they meet the requirements for production level tools. Challenges include modeling of sources, improvement of conversion efficiency, finding ways to increase source brightness, spectral purity filter development and contamination control. These and other issues are among topics that were proposed by a technical working group for the 2013 Source Workshop in Dublin, Ireland.

Silicon Labs, a developer of high-performance, analog-intensive, mixed-signal ICs, today introduced the industry’s most energy-friendly 32-bit microcontrollers (MCUs) based on the ARM Cortex-M0+ processor. The EFM32 Zero Gecko MCU family is designed to achieve the lowest system energy consumption for a wide range of battery-powered applications such as mobile health and fitness products, smart watches, activity trackers, smart meters, security systems and wireless sensor nodes, as well as battery-less systems powered by harvested energy. The new Zero Gecko family is the latest addition to the EFM32 Gecko portfolio pioneered by Energy Micro. The family includes 16 MCU products designed from the ground up to enable the lowest possible energy consumption for connected devices enabling the Internet of Things (IoT).


The EFM32 Zero Gecko MCUs feature an energy management system with five energy modes that enable applications to remain in an energy-optimal state, spending as little time as possible in the energy-hungry active mode. In deep-sleep mode, Zero Gecko MCUs have 0.9 μA standby current consumption with a 32.768 kHz RTC, RAM/CPU state retention, brown-out detector and power-on-reset circuitry active. Active-mode power consumption scales down to 110 µA/MHz at 24 MHz with real-world code (prime number search algorithm) executed from flash. Current consumption is less than 20 nA in shut-off mode. The EFM32 MCUs further reduce power consumption with a 2-microsecond wakeup time from standby mode.
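
To put those numbers in perspective, here is a rough battery-life estimate in Python. The active and deep-sleep currents are the figures quoted above; the duty cycle and coin-cell capacity are illustrative assumptions, not Silicon Labs specifications.

```python
# Rough battery-life estimate for a duty-cycled MCU, using the figures
# quoted above for the EFM32 Zero Gecko. Duty cycle and battery capacity
# are illustrative assumptions, not vendor data.

ACTIVE_UA_PER_MHZ = 110.0      # active-mode current, uA/MHz (from the text)
CLOCK_MHZ         = 24.0       # clock frequency used for the quoted figure
DEEP_SLEEP_UA     = 0.9        # deep-sleep current with RTC and retention, uA
DUTY_CYCLE        = 0.001      # assumed fraction of time spent in active mode
BATTERY_MAH       = 225.0      # assumed coin-cell capacity, mAh

active_ua  = ACTIVE_UA_PER_MHZ * CLOCK_MHZ            # ~2640 uA while awake
average_ua = DUTY_CYCLE * active_ua + (1 - DUTY_CYCLE) * DEEP_SLEEP_UA

battery_life_hours = (BATTERY_MAH * 1000.0) / average_ua
print(f"Average current: {average_ua:.2f} uA")
print(f"Estimated battery life: {battery_life_hours / 24 / 365:.1f} years")
```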

Like all EFM32 Gecko products, the Zero Gecko MCUs include an energy-saving feature called the Peripheral Reflex System (PRS) that significantly enhances system-level energy efficiency. The PRS monitors complex system-level events and allows different MCU peripherals to communicate directly with each other and autonomously without involving the CPU. An EFM32 MCU can watch for a series of specific events to occur before waking the CPU, thereby keeping the Cortex-M0+ processor core in an energy-saving standby mode as long as possible and reducing overall system power consumption.

The EFM32 Zero Gecko MCUs feature many of the same power-saving precision analog peripherals included in Silicon Labs’ popular Tiny Gecko, Giant Gecko and Wonder Gecko devices. These low-energy peripherals include an analog comparator, a supply voltage comparator, an on-chip temperature sensor and a 12-bit analog-to-digital converter (ADC) with 350 μA current consumption at a 1 MHz sample rate.

The EFM32 Zero Gecko devices are the only Cortex-M0+ MCUs on the market that integrate a programmable current digital-to-analog converter (IDAC). This on-chip precision-analog IDAC generates a biasing current from 0.05-64 µA with only 10 nA overhead. The IDAC provides an accurate bias and/or control capability for companion ICs and other external circuits including amplifiers, sensors, Wheatstone bridges and resistor ladders, eliminating the need for external power amplifier components for many cost-sensitive applications.

The Zero Gecko devices are also the only Cortex-M0+ MCUs containing a 128-bit Advanced Encryption Standard (AES) hardware block. With this built-in hardware AES encryption acceleration support, the Zero Gecko MCUs provide an ideal companion for RF transmitters and transceivers used in connected device applications for the Internet of Things.

“The Internet of Things is a huge and exciting market made possible by low-cost, battery-powered connected devices and wireless sensor nodes that sip nanoamps of energy,” said Geir Førre, senior vice president and general manager of Silicon Labs’ microcontroller products. “The IoT market requires battery-friendly Cortex-M0+ based MCUs that save both energy and system cost. Our new EFM32 Zero Gecko MCUs – shipping now at very cost-competitive prices – enable developers to create embedded systems that are four times more energy-efficient than possible with other Cortex-M0/0+ MCUs.”

The EFM32 Zero Gecko family is pin- and software-compatible with Silicon Labs’ broad portfolio of nearly 250 EFM32 Gecko MCU products.

SPTS Technologies and imec announced a joint partnership to further advance micro- and nanosized components for BioMEMS, using SPTS’ Rapier silicon deep reactive ion etching (Si DRIE) technology.

Micro and nanotechnologies are fast becoming key enablers in medical research, diagnosis and treatment, with rapid developments in areas like DNA sequencing and molecular diagnostics. Imec, as one of the pioneers in the field, is developing the underlying heterogeneous technology and components as the backbone to these life science tools.

One of the most important process techniques in BioMEMS manufacturing is deep silicon etching. It can be used to manufacture devices such as microfluidic channels, polymerase chain reaction (PCR) chambers, mixers and filters. As a leading institute in advanced micro and nanoelectronics research, imec is currently developing lab-on-chip technology for fast SNP (single nucleotide polymorphisms) detection in human DNA and a microsized detection system for circulating tumor cells in the human blood stream. The outcome of this research will be products that deliver a better quality of life for current and future generations.


“We chose SPTS as a partner after running extensive wafer demonstrations on their tool, challenging them on the demanding structures required by our current projects,” says Deniz Sabuncuoglu Tezcan, who is leading imec’s Novel Components Integration team. “The results convinced us that the Rapier module can help us create the devices we envisage. The demos also showed that the processes will deliver the high throughputs and repeatability necessary for cost-effective volume production.”

3D-IC: Two for one


September 25, 2013

Zvi Or-Bach, President & CEO of MonolithIC 3D Inc., blogs about upcoming events related to 3D ICs.

This coming October there are two IEEE conferences discussing 3D ICs, both within an easy drive of Silicon Valley.

The first is the IEEE International Conference on 3D System Integration (3D IC), October 2-4, 2013 in San Francisco; just after it, in the second week of October, comes the S3S Conference, October 7-10 in Monterey. The IEEE S3S Conference was expanded this year to include a 3D IC track and accordingly got the new name S3S (SOI-3D-Subthreshold), which indicates the growing importance of and interest in 3D IC technology.

This year is special in that both of these conferences will contain presentations on the two aspects of 3D IC technology. The first is 3D IC using Through-Silicon Vias (TSVs), which some call "parallel" 3D, and the second is monolithic 3D IC, which some call "sequential" 3D.

This is very important progress for the second type of 3D IC technology. I clearly remember back in early 2010 attending another local IEEE 3D IC conference, 3D Interconnect: Shaping Future Technology. An IBM technologist started his presentation, titled "Through Silicon Via (TSV) for 3D integration," with an apology for the redundancy in his title, stating that if it is 3D integration, it must be TSV!

Yes, we have made quite a lot of progress since then. This year one of the major semiconductor research organizations, CEA-Leti, placed monolithic 3D on its near-term roadmap, and it was followed shortly after by a Samsung announcement of mass production of monolithic 3D non-volatile memories: 3D NAND.

We are now learning to accept that 3D IC has two sides, which in fact complement each other. At the risk of over-simplifying, I would say that the main function of the TSV type of 3D IC is to overcome the limitations of PCB interconnect, as exemplified by the well-known Hybrid Memory Cube Consortium, which bridges the gap between the DRAM built by memory vendors and the processors built by processor vendors. At the recent VLSI Conference, Dr. Jack Sun, CTO of TSMC, presented the 1,000x gap that has opened between on-chip and off-chip interconnect. This clearly explains why TSMC is putting so much effort into TSV technology (see the following figure):

System level interconnect gaps

On the other hand, monolithic 3D’s function is to enable the continuation of Moore’s Law and to overcome the escalating on-chip interconnect gap. Quoting Robert Gilmore, Qualcomm VP of Engineering, from his invited paper at the recent VLSI conference: "As performance mismatch between devices and interconnects increases, designs have become interconnect limited. Monolithic 3D (M3D) is an emerging integration technology that is poised to reduce the gap significantly between device and interconnect delays to extend the semiconductor roadmap beyond the 2D scaling trajectory predicted by Moore’s Law…" At IITC11 (IEEE Interconnect Conference 2011), Dr. Kim presented detailed work on the effect of TSV size for a four-layer 3D IC vs. 2D. The results showed that for a TSV of 0.1µm, which is the case in monolithic 3D, the 3D device wire length (and hence power and performance) was equivalent to scaling by two process nodes! The work also showed that a TSV of 5.0µm resulted in no improvement at all (today’s conventional TSVs are striving to reach the 5.0µm size); see the following chart:

Cross comparison of various 2D and 3D technologies. Dashed lines are wirelengths of 2D ICs. #dies: 4.

So as monolithic 3D becomes an important part of the 3D IC space, we are most honored to have a role in these coming IEEE conferences. It will start on October 2nd in San Francisco, when we will present a tutorial that is open to all conference attendees. In this monolithic 3D IC tutorial we plan to present more than 10 powerful advantages opened up by the new dimension for integrated circuits. Some of these are well known and some probably have not been presented before. These new capabilities would be very important in various markets and applications.

At the following S3S conference, we are scheduled to give the plenary talk for the 3D IC track on October 8. The plenary talk will present three independent paths to monolithic 3D that use the same materials, fab equipment and well-established semiconductor processes. These three paths could be used independently or mixed, providing multiple options that different entities can tailor to their own needs.

Clearly, 3D IC technologies are growing in importance, and this coming October brings a golden opportunity to get a ‘two for one’ and catch up on the latest and greatest in TSV and monolithic 3D technologies. I look forward to seeing you there.

SPICEing up circuit design


September 25, 2013


Dr. Zhihong Liu, Executive Chairman, ProPlus Design Solutions, blogs about the challenges of designing for yield using SPICE models. 

The ubiquitous SPICE circuit simulator, initially released 40 years ago, made a recent list of the top 10 most significant developments in the history of EDA, as it should. Its widespread use and importance among circuit designers cannot be overstated.

However, the third generation of SPICE (Simulation Program with Integrated Circuit Emphasis) simulation is showing its age. Circuit designers are running giga-scale simulations because of complex designs, the growing need for post-layout simulation, and the large number of simulations required to design for variation effects.

Giga-scale designs range from post-layout analog circuits, high-speed I/Os, memory and CMOS image sensor arrays to full-chip power ICs, clock trees and critical path nets. They require a parallel SPICE simulator with high capacity on the order of tens of millions of elements for analog designs and hundreds of millions of elements for memory designs. A SPICE simulator needs to deliver high performance with pure SPICE accuracy and offer support for the latest process technologies such as FinFETs.

Three dimensional FinFETs bring additional challenges to device modeling and circuit simulations. Modeling and simulation tools must be able to handle increased layout dependencies in device characteristics and more complex parasitics, including internal parasitics and interactions between the device and surrounding components.

Current SPICE simulators can offer few of these must-haves. Traditional SPICE simulators lack capacity even with parallelization capabilities. FastSPICE simulators deliver capacity at the cost of accuracy and are losing steam as an increasing number of designs require post-layout verification that weakens circuit hierarchy. The FastSPICE table-model approach and approximated matrix solutions can yield unreliable results and poor usability for complicated giga-scale designs with multiple operating modes and supply voltages.

The key is to maintain simulation accuracy as traditional SPICE simulators do while handling the large circuit capacity that typically only FastSPICE simulators can manage in reasonable simulation time. In today’s bleeding-edge designs, designers often can’t buy performance or capacity by sacrificing accuracy, as FastSPICE simulators do.

EDA vendors are aware of these trends and the increasingly urgent market needs. Almost all existing SPICE and FastSPICE simulators have been working hard to utilize parallel technologies on multicore and/or multi-CPU computing environments to improve simulation performance. However, patched-on parallelization offers short-term improvement, and can’t fully meet the need for simulation accuracy, performance and memory consumption for giga-scale circuit designs.

New simulation technology is essential for deep-nanometer technology designs where process variations impact circuit yield and performance. In addition to capacity challenges related to increasing circuit size, designers need to run large numbers of repeated simulations to tackle the impact of process variations. Process-Voltage-Temperature (PVT) analysis and statistical Monte Carlo analysis create another challenge dimension for giga-scale simulations.
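
To illustrate why statistical analysis multiplies the simulation burden, the short Python sketch below runs a brute-force Monte Carlo yield estimate on a stand-in for a circuit metric. The Gaussian delay model, spec limit and run count are arbitrary assumptions; a real DFY flow would call the SPICE engine at each sample and use smarter high-sigma sampling.

```python
import random
import statistics

# Toy Monte Carlo yield estimate. The "circuit" is a stand-in for a SPICE
# simulation: a hypothetical delay metric with Gaussian process variation.
# Real DFY flows run the actual simulator here and use variance-reduction
# techniques instead of brute force for high-sigma tails.

def simulated_delay_ps(sigma_scale=1.0):
    # Hypothetical model: nominal 100 ps delay, perturbed by variation.
    return 100.0 + random.gauss(0.0, 5.0 * sigma_scale)

SPEC_LIMIT_PS = 115.0   # pass/fail criterion (assumed)
N_RUNS = 100_000        # each run would be one SPICE simulation

delays = [simulated_delay_ps() for _ in range(N_RUNS)]
failures = sum(d > SPEC_LIMIT_PS for d in delays)
yield_est = 1.0 - failures / N_RUNS

print(f"mean = {statistics.mean(delays):.1f} ps, "
      f"sigma = {statistics.stdev(delays):.1f} ps")
print(f"estimated yield = {yield_est:.4%} ({failures} failures in {N_RUNS} runs)")
# Note: resolving a 6-sigma failure rate (~1e-9) this way would need
# billions of runs, which is why built-in high-sigma methods matter.
```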

In a circuit designer’s ideal world, the next-generation SPICE circuit simulator would be highly accurate with full SPICE analysis features and support for industry-standard inputs and outputs. It would be much, much faster than traditional SPICE simulators and able to handle all circuit types. The ability to simulate giga-scale circuits and challenging post-layout designs is mandatory. Building parallelization in a SPICE simulator from the ground up instead of patched-on solutions is the key to handling giga-scale simulations with good performance and memory consumption, while still offering SPICE accuracy. Most aging circuit simulators will soon show their limitations.

Ideally, the new SPICE simulator also will have native capabilities to handle process variations from 3-sigma to high-sigma Monte Carlo simulations, where hundreds or even thousands of simulations are needed. Circuit designers have begun to search for Design-for-Yield (DFY) solutions and not just cobbled-together point tools. A total DFY solution starts with a high-capacity, high-performance SPICE simulator as its engine. A simulator designed for DFY with built-in statistical simulation capabilities can provide incomparable simulation performance when compared to ad-hoc variation analysis with external circuit simulators.

And, of course, the SPICE simulation engine should be tightly integrated with statistical transistor model extraction and yield prediction/improvement software. Those components make a total DFY solution, and enable the efficiency and consistency of yield-analysis results.

Giga-scale simulation isn’t the future, it’s here today and needs viable solutions to meet the challenges it has created. SPICE simulators have served the circuit design industry for 40 years, and it’s time for the next generation, essential for deep nanometer technology designs.

Common thermal considerations in LEDs include test point temperature and thermal power.

One characteristic typically associated with LEDs is that they run cool. While it’s true that LEDs are cool relative to the filaments found in incandescent and halogen lamps, they do generate heat within the semiconductor structure, so the system must be designed in such a way that the heat is safely dissipated. The waste heat white LEDs generate in normal operation can damage both the LED and its phosphor coating (which converts the LED’s native blue color to white) unless it’s properly channeled away from the light source.

A luminaire’s thermal design is specified to support continuous operation without heat damage and oftentimes separates the LEDs from temperature-sensitive electronics, which provides an important advantage over individual LED replacement bulbs.

Test point temperature
Test point temperature (Tc) is one characteristic that plays an important role during integration to determine the amount of heat sinking, or cooling, that the luminaire design requires. In general, the higher the Tc limit compared to worst-case ambient temperature (Ta), the more flexibility a luminaire manufacturer will have in designing or selecting a cooling solution.

The worst-case ambient temperature is usually 40°C or higher, so a module with a low Tc rating (e.g., 65°C) doesn’t have much headroom above the already-hot ambient temperature. Trying to keep a module at a Tc of 65°C when the Ta is 40°C while dissipating 40W of thermal power is very difficult to do with a passive heat sink, so a fan or other active heat sink will likely be required. On the other hand, a module with a Tc rating of 90°C or higher (while still meeting lumen maintenance and warranty specifications) has at least 50°C of headroom over the ambient temperature and should be able to make use of a reasonably sized passive heat sink.
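
The arithmetic behind that headroom argument is simple: the maximum tolerable heat-sink thermal resistance is the Tc-to-Ta margin divided by the thermal load. The sketch below, using the 65°C and 90°C examples and the 40W load quoted above, is only a back-of-the-envelope check, not a substitute for validation testing.

```python
# Maximum allowable heat-sink thermal resistance (case-to-ambient) for an
# LED module: R_max = (Tc_limit - Ta) / thermal_power. Figures follow the
# examples in the text; any real design still needs validation testing.

def max_sink_resistance_c_per_w(tc_limit_c, ta_c, thermal_power_w):
    """Largest thermal resistance (degC/W) that keeps the module at or below its test point limit."""
    return (tc_limit_c - ta_c) / thermal_power_w

TA_WORST_CASE_C = 40.0    # worst-case ambient from the text
THERMAL_POWER_W = 40.0    # thermal load used in the 65 degC example

for tc_limit in (65.0, 90.0):
    r_max = max_sink_resistance_c_per_w(tc_limit, TA_WORST_CASE_C, THERMAL_POWER_W)
    print(f"Tc limit {tc_limit:.0f} degC -> heat sink must be <= {r_max:.2f} degC/W")

# Result: ~0.62 degC/W for the 65 degC module (hard to meet passively, per the
# text) versus 1.25 degC/W for the 90 degC module, which a passive sink can meet.
```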

In general, the higher you can push the test point temperature on the LED module, the smaller the heat sink you need. Much depends on Ta: if the module can’t withstand a high enough maximum temperature, it’s impossible to cool it below Ta without a refrigerated system, regardless of the size or effectiveness of the heat sink. Stretching the difference between Tc and Ta as much as possible gives you greater room to deviate from the norm and be creative in your heat sink selection.

Xicato’s Corrected Cold Phosphor approach is aimed at lowering the thermal resistance between the phosphor and the heat sink, without having to cool through the hot LEDs. Today, the module output is 4,000 lumens, which wouldn’t have been possible five years ago.

The bottom-line considerations with respect to test point temperature are really flexibility and cost. If a module with a high Tc rating is chosen, there will be more options for design and cost savings than are provided by a module with a low Tc rating, assuming the same power dissipation.

Figure 1: Xicato XSM module family sample passive heat sink matrix showing suitable module usage for a range of thermal classes.

Thermal power
Another key characteristic, thermal power (load) has always been a difficult number to deal with. LED module manufacturers don’t always provide the information required to calculate thermal power because this value can change based on such variables as lumen package, Color Rendering Index (CRI), correlated color temperature (CCT), etc. Cooling solutions are often rated for performance in terms of degrees Celsius per watt, which, unfortunately, necessitates calculating the thermal power.

To address this problem, Xicato has developed a “class system,” through which each module variation is evaluated and assigned a “thermal class.” With this system, determining the appropriate cooling solution is as simple as referencing the thermal class from the module’s data sheet to a matrix of heat sinks. FIGURE 1 is a sample passive heat sink thermal class matrix for the Xicato XSM module family.

Let’s take, as an example, a 1300 lumen module with a thermal class rating of “F.” According to the matrix, for an ambient condition of 40°C, the best choice of heat sink would be one that is 70 mm in diameter and 40 mm tall. Validation testing is still required for each luminaire during the design phase, as variations in trims, optics, and mechanical structures can affect performance. Looking at the example module, if a manufacturer were to design a luminaire around this class “F” heat sink and nine months later a new, higher-flux class “F” module were released, the same luminaire would be able to support the higher-lumen module without the need for additional thermal testing. The thermal-class approach supports good design practice, speeds development and product portfolio expansion, and provides a future-proof approach to thermal design and integration.

Table 1: Requirements for luminaire makers submitting a fixture for thermal validation (partial summary).

Most specification sheets cite an electrical requirement for the module and the lumen output. The electrical input is basically the voltage the module requires and the current needed to drive it; the product of these two variables is power. The problem with the output is that it’s always expressed in lumens. A lumen is not a measure of power, but rather a unit that quantifies the optical response of the eye: it’s calibrated specifically to what the human eye sees, but there’s a quality of brightness that comes into play that can’t easily be tied back to electrical power. There’s no way to figure out exactly how much thermal power is being dissipated by the module; power “in” is measured in electrical energy (voltage × current), while power “out” is non-visible electromagnetic radiation, visible electromagnetic radiation, and thermal power. None of this is shown in datasheets.

This intangible factor creates a challenge – for most customers, a watt is a watt, but in reality, there are thermal watts, electrical watts and optical watts; not all are easily determined. The customer can attempt calculations – e.g., how to cool 10 thermal watts – but the fact is that people don’t generally think that way. Many customers don’t have engineers on staff, and those that do often use rough approximations to determine compatibility.
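
Those rough approximations usually amount to subtracting an assumed optical output from the electrical input. The sketch below does exactly that; the drive voltage, current, lumen output and assumed luminous efficacy of the emitted light are illustrative numbers, not values from any datasheet.

```python
# Rough thermal-power estimate for an LED module when the datasheet only
# gives electrical input and lumen output. The efficacy used to split
# optical from thermal watts is an assumed value for illustration.

def thermal_power_estimate_w(voltage_v, current_a, lumens, lm_per_optical_w=300.0):
    """Estimate thermal watts as electrical input minus the optical power implied by the lumen output."""
    electrical_w = voltage_v * current_a
    optical_w = lumens / lm_per_optical_w   # assumed lumens per optical watt for white light
    return electrical_w - optical_w, electrical_w, optical_w

# Hypothetical 1300-lumen module drawing 36 V at 0.5 A (18 W electrical).
thermal_w, electrical_w, optical_w = thermal_power_estimate_w(36.0, 0.5, 1300.0)
print(f"electrical in: {electrical_w:.1f} W")
print(f"optical out (assumed efficacy): {optical_w:.1f} W")
print(f"thermal load to be sunk: ~{thermal_w:.1f} W")
```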

Xicato has defined modules that go up to Class U. The Tc rating, while independent of the module flux package, is interrelated with it. Class A modules, in general, don’t need a heat sink; lower power modules usually deliver about 300 lumens. On the other hand, an XLM 95 CRI product is a Class U product that requires either a passive heat sink or an active heat sink. Once the module and heat sink have been selected and integrated into the luminaire, the next step is thermal validation, which Xicato performs for the specific fixture using an intensive testing process that includes detailed requirements that must be met by the luminaire maker when submitting a fixture for validation (see Table 1 for a partial summary).

The validation is based not on lumens but on the thermal class model, and the fixture rating is likewise based on thermal class rather than wattage, because, as noted above, not all watts are the same. With this approach, an upgrade can be made easily without having to do any retesting. •


JOHN YRIBERRI is the director of Global Application Engineering at Xicato, Inc., San Jose, CA. John joined Xicato in November of 2007 and was the project leader for Xicato’s first LED platform, the Xicato Spot Module (XSM).

Serial product

This paper describes an alternative method for the generation and propagation of binary signals, Quantum Cellular Automata (QCA), which in most of the literature is implemented with quantum dots or quantum wells [1],[2]. By using new approaches based on graphene structures, the signal processing capabilities of QCA assemblies may be obtained at significantly reduced complexity compared to conventional quantum-based QCA assemblies, which typically operate at very low temperatures. A two-layer graphene structure is presented in order to overcome technological and operating limitations that affect traditional approaches.

Figure 1: Cell configuration conventionally defined as “0” and “1” logic. 

State-of-the-art QCA: Quantum dots

The quantum dot [3] is an “artificial atom” obtained by embedding a small quantity of material in a substrate. The definition “artificial atom” for a quantum dot is intended in an electrical sense, allowing the transition of a single electron (or a single hole) between a pair of dots. In this field, the most common technology is based on indium deposition on a GaAs substrate.

This happens because the lattice structure of InAs is quite different from that of GaAs, so the indium tends to concentrate in very small islands. By using several layers of GaAs, pillar structures can be realized. Considering this vertical structure, the location probabilities of electrons and holes can be considered as “distributed” over several dots.

Figure 2: Device cell is driven to “1” due to majority effect. 

Consider that four dots realized on the same layer constitute a QCA cell [2]. In each cell, two extra electrons can assume different locations by tunneling between the dots and providing the cell with a certain polarization. Coulombic repulsion causes the two electrons to occupy antipodal sites within the cell (see FIGURE 1). The dimension of the cells may be around 10 nm.

An array of interacting quantum cells can be considered a quantum-dot cellular automaton. However, it must be noted that no tunneling occurs between cells; the polarization of a cell is determined only by Coulombic interaction with its neighboring cells.

QCA operating principles

The status of each cell can therefore be, according to Fig. 1, only the “0” or “1” configuration, depending on the influence of its neighbors, which produces a “majority” effect. In other words, the state that is most present at the border of the cell “wins” and is copied onto it through polarization. An example is given in FIGURE 2.

Figure 3: Examples of QCA structure.
Figure 4: QCA logic gates.

Signal propagation happens as a “domino” effect with very low power consumption. Simple structures can easily be arranged, as in FIGURE 3.

By fixing the polarization in one of the cells in a “majority” crossing structure, AND and OR gates can be easily obtained (FIGURE 4).
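
Because the majority vote is what turns a QCA crossing into logic, a short truth-table sketch may help. The three-input majority function below is the standard QCA primitive; fixing one input to "0" or "1" yields AND and OR, as in FIGURE 4.

```python
# Three-input majority vote: the basic combinational primitive of a QCA
# crossing cell. Fixing one driver cell to 0 or 1 turns it into AND or OR,
# exactly as described for the gates in FIGURE 4.

def majority(a: int, b: int, c: int) -> int:
    """Return the logic state held by at least two of the three neighboring cells."""
    return 1 if (a + b + c) >= 2 else 0

def qca_and(a: int, b: int) -> int:
    return majority(a, b, 0)   # one driver cell fixed to "0"

def qca_or(a: int, b: int) -> int:
    return majority(a, b, 1)   # one driver cell fixed to "1"

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  AND={qca_and(a, b)}  OR={qca_or(a, b)}")
```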

Quantum dots: Current technology

One of the most promising technologies for implementing quantum dots, and therefore quantum cellular automata, is the Bose-Einstein condensate [4]. This approach overcomes the traditional one based on indium deposition on a GaAs substrate (aggregation of small indium islands due to lattice mismatch).

Bose-Einstein condensates (BECs) are made by aggregating ultra-cold atoms (typically rubidium or sodium isotopes) confined using laser manipulation and magnetic fields.

A BEC’s properties are quite atypical, and BECs are therefore described as the “fifth phase of matter,” after solids, liquids, gases and plasmas. Every atom in a BEC has the same quantum state, and therefore a BEC can be considered a “macroscopic atom.” Tunneling and quantum effects also occur at a macroscopic scale, with advantages for state definition and detection. A major drawback is the very low operating temperature (around 1 K), which may constitute a limit for physical implementation.

Figure 5: Structure of a graphene layer. Selected area is about 4 nm².

Proposed technology

In recent years, increasing interest has been devoted to new materials whose properties seem to be very promising for nanoscale circuit applications. Graphene [5] is a 0.3nm-thick layer of carbon atoms with a honeycomb structure, whose conductivity, flexibility and transparency could have a deep impact on future integration technology (FIGURE 5).

Figure 6: An example of a graphene layer with four hemispherical “hills”.  

Graphene can also be doped as conventional semiconductors are (despite the fact that, based on its electrical properties, it can be considered a pure conductor), and it can therefore be used to build nanometric transistors. However, the most interesting features that suggest graphene as a good material for QCA cells are the following:

      1. In contrast to metallic or semiconductor QCA, the dimensions of molecular automata allow for operation at ambient temperatures because they have greater electrostatic energy [6],[7].
      2. Low power requirements and low heat dissipation allow high density cell disposition [8],[9].
      3. Structure flexibility (see FIGURE 6) and physical bandgap arrangement allow cells to be built with the bistable behaviour of a two-charge system.

Different techniques are currently available to reshape a graphene layer. Although industrial processes are not yet in place, it is arguable that serial production of a QCA graphene cell could be possible, and simple, well-defined process steps for the single cell have been identified.

Figure 7: Graphene based QCA cell structure. The four hemispherical cavities allow the two negative charges to be hosted in both configurations of logic states.  

Idea for structure and process steps

The basic idea is to realize a square structure with four cavities in which two negative charges (suitable ions or molecules) could be placed and moved depending on neighborhood polarization. Graphene manipulation may allow cell dimensions that are quite comparable to those of the traditional semiconductor Q-dot approach (for a solid-state single-electron transition cell, the distance between dots is typically 20nm, and the average distance between interacting cells is 60nm). However, in order to cope with the chosen ion charge (Coulombic interaction can be stronger compared with single electrons) and with process requirements, a slight increase in distances is also possible. The structure is based on a two-layer graphene arrangement (see FIGURE 7).

The top layer (Layer 1) needs a few additional process steps in order to realize the four hemispherical cavities. The different energy levels of the two layers (obtained by establishing different potentials for the two conductors) force negative charges, in the absence of external polarization, to stay at the bottom of the holes. Supposing a dimension of 15-20nm for each cavity in order to host suitable electronegative molecules or ions (e.g., Cl⁻, F⁻, SO₄²⁻), the process steps could be the following:

Layer 2 definition (bottom layer process steps):

    1. Graphene chemical vapor deposition (CVD) on copper.
    2. Graphene (layer 2) transfer on the target substrate (through copper wet etching and standard transfer techniques).

Layer 1 definition (top layer process steps):

    1. Graphene chemical vapor deposition (CVD) on copper.
    2. Resist spin-coating (ex: PolyMethylMethAcrylate, PMMA) on graphene CVD (on copper).
    3. E-Beam lithography for hemispherical cavities definition.
    4. Resist selective removal (ex: TetraMethyl Ammonium Hydroxide chemistry).
    5. Graphene etching (plasma O2).
    6. Resist removal (ex: acetone).
    7. Graphene (layer 1) transfer on layer 2 (through copper wet etching and standard transfer techniques).

In addition to the “physical” bandgap realized with this structure, an electronic bandgap could be created on Layer 1 during the third process step. Defects induced in hemispherical cavities may allow a bandgap of 1.2eV to be reached.

Signal transduction of the resulting logic level

After the signal processing performed by the QCA network, the resulting logic state is stored in the last QCA cell. In order to be used by other electronic devices, this information has to be converted into a suitable voltage level. From an operational point of view, it is sufficient to detect a negative charge in the upper-right position of the last cell; if it is present, according to Fig. 1, the logic state is “1”; otherwise it is “0”. This operation is not trivial, owing to the small quantity of charge to be detected and the small dimension of its location. To this end, among several different strategies, two approaches could be suitable: the ion approach and the optical approach.

Ion approach. This approach can be performed by using channel electron multipliers (or channeltrons), which are ion detectors with high amplification (10⁸). Every ion can generate a cascade of electrons inside the detector, and therefore consistent charge pulses that can be counted. In our case, there is no ion flux across a surface, and therefore counting is not needed (the information is only ion presence or absence). However, the detection area is very small (a quarter cell). This problem could be solved by attaching carbon nanotubes (e.g., 10nm diameter each) to the charge pulse amplifier terminals, acting as nano-guides, in order to increase their resolution.

Optical approach. The basic principle of this approach is in theory quite simple: in order to detect an object of nanometric dimensions, like molecules or ions, a suitable wavelength must be used. For the described application, X-ray radiation seems to be the most appropriate, with wavelengths ranging between 1 pm and 10 nm. However, the complexity of the detection setup (high precision is needed in order to minimize bit errors) and the huge number of transducers required for large numbers of bit conversions may in some cases make this solution too expensive with respect to the ion approach. •

References

1. W. Porod, World Scientific Series on Nonlinear Science, 26, 495 (1999).
2. I. Amlani et al., Science, 284, 289 (1999).
3. G. Tóth et al., J. Appl. Phys., 85, 2977 (1999).
4. J.R. Ensher et al., Phys. Rev. Lett., 77, 1996 (1996).
5. K. Novoselov et al., Science, 306, 666 (2004).
6. X. Du et al., Nature, 462, 192 (2009).
7. K.I. Bolotin et al., Phys. Rev. Lett., 101, 096802 (2008).
8. F. Schwierz, Nat. Nanotechnol., 5, 487 (2010).
9. D. Frank et al., IEEE Electron Dev. Lett., 19, 385 (1998).


DOMENICO MASSIMO PORTO is a systems analysis specialist staff engineer, audio and body division technical marketing, automotive product group, STMicroelectronics, Milan, Italy.

Power device characterization and reliability testing require instrumentation capable of sourcing higher voltages and more sensitive current measurements than ever before.

Silicon carbide (SiC), gallium nitride (GaN), and similar wide bandgap semiconductor materials offer physical properties superior to those of silicon, which allows for power semiconductor devices based on these materials to withstand high voltages and temperatures. These properties also permit higher frequency response, greater current density and faster switching. These emerging power devices have great potential, but the technologies necessary to create and refine them are less mature than silicon technology. For IC fabricators, this presents significant challenges associated with designing and characterizing these devices, as well as process monitoring and reliability issues.

Before wide bandgap devices can gain commercial acceptance, their reliability must be proven and the demand for higher reliability is growing. The continuous drive for greater power density at the device and package levels creates consequences in terms of higher temperatures and temperature gradients across the package. New application areas often mean more severe ambient conditions. For example, in automotive hybrid traction systems, the temperature of the cooling liquid for the combustion engine may reach up to 120°C. In order to provide sufficient margin, this means the maximum junction temperature (TJMAX) must be increased from 150°C to 175°C. In safety-critical applications such as aircraft, the zero defect concept has been proposed to meet stricter reliability requirements.

HTRB reliability testing
Along with the drain-source voltage (VDS) ramp test, the High Temperature Reverse Bias (HTRB) test is one of the most common reliability tests for power devices. In a VDS ramp test, as the drain-source voltage is stepped from a low voltage to a voltage that’s higher than the rated maximum drain-source voltage, specified device parameters are evaluated. This test is useful for tuning the design and process conditions, as well as verifying that devices deliver the performance specified on their data sheets. For example, Dynamic RDS(ON), monitored using a VDS ramp test, provides a measurement of how much a device’s ON-resistance increases after being subjected to a drain bias. A VDS ramp test offers a quick form of parametric verification; in contrast, an HTRB test evaluates long-term stability under high drain-source bias. HTRB tests are intended to accelerate failure mechanisms that are thermally activated through the use of biased operating conditions. During an HTRB test, the device samples are stressed at or slightly less than the maximum rated reverse breakdown voltage (usually 80 or 100% of VRRM) at an ambient temperature close to their maximum rated junction temperature (TJMAX) over an extended period (usually 1,000 hours).

Because HTRB tests stress the die, they can lead to junction leakage. There can also be parametric changes resulting from the release of ionic impurities onto the die surface, from either the package or the die itself. The test’s high temperature accelerates failure mechanisms according to the Arrhenius equation, which describes the temperature dependence of reaction rates; the test therefore simulates one conducted for a much longer period at a lower temperature. The leakage current is continuously monitored throughout the HTRB test, and a fairly constant leakage current is generally required to pass. Because it combines electrical and thermal stress, this test can be used to check the junction integrity, crystal defects and ionic-contamination level, which can reveal weaknesses or degradation effects in the field depletion structures at the device edges and in the passivation.
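
To make the Arrhenius acceleration concrete, the sketch below computes the acceleration factor between a stress temperature and a use temperature. The 0.7 eV activation energy and the junction temperatures are illustrative assumptions; real values depend on the device and on the failure mechanism being accelerated.

```python
import math

# Arrhenius acceleration factor between a high-temperature stress condition
# and a lower use condition: AF = exp[(Ea/k) * (1/T_use - 1/T_stress)].
# The activation energy and temperatures below are illustrative assumptions.

BOLTZMANN_EV_PER_K = 8.617e-5

def acceleration_factor(ea_ev, t_use_c, t_stress_c):
    """Return how much faster a thermally activated mechanism proceeds at the stress temperature."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV_PER_K) * (1.0 / t_use_k - 1.0 / t_stress_k))

af = acceleration_factor(ea_ev=0.7, t_use_c=55.0, t_stress_c=175.0)
print(f"Acceleration factor: {af:.0f}x")
print(f"1,000 stress hours ~ {1000 * af / 8760:.0f} years at the use condition")
```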

Instrument and measurement considerations
During operation, power semiconductor devices undergo both electrical and thermal stress: when in the ON state, they have to pass tens or hundreds of amps with minimal loss (low voltage, high current); when they are OFF, they have to block thousands of volts with minimal leakage currents (high voltage, low current). Additionally, during the switching transient, they are subject to a brief period of both high voltage and high current. The high current experienced during the ON state generates a large amount of heat, which may degrade device reliability if it is not dissipated efficiently.

Reliability tests typically involve high voltages, long test times, and often multiple devices under test (wafer-level testing). As a result, properly designed test systems and measurement plans are essential to avoid breaking devices, damaging equipment, and losing test data. Consider the following factors when configuring test systems and plans for executing VDS ramp and HTRB reliability tests: device connections, current limit control, stress control, proper test abort design, and data management.

Device connections: Depending on the number of instruments and devices or the probe card type, various connection schemes can be used to achieve the desired stress configurations. When testing a single device, a user can apply voltage at the drain only for VDS stress and measure, which requires only one source measure unit (SMU) instrument per device. Alternatively, a user can connect each gate and source to a SMU instrument for more control in terms of measuring current at all terminals, extend the range of VDS stress, and set a voltage on the gate to simulate a practical circuit situation. For example, to evaluate the device in the OFF state (including the HTRB test), the gate-source voltage (VGS) might be set to VGS ≥ 0 for a P-channel device, or VGS = 0 for an enhancement-mode device. Careful consideration of device connections is essential for multi-device testing. In a vertical device structure, the drain is common; therefore, it is not used for stress sourcing, so that stress will not be terminated in case a single device breaks down. Instead, the source and gate are used to control stress.

Current limit control: Current limit should allow for adjustment at breakdown to avoid damage to the probe card and device. The current limit is usually set by estimating the maximum current during the entire stress process, for example, the current at the beginning of the stress. However, when a device breakdown occurs, the current limit should be lowered accordingly to avoid the high level current, which would clamp to the limit, melting the probe card tips and damaging the devices over an extended time. Some modern solutions offer dynamic limit change capabilities, which allow setting a varying current limit for the system’s SMU instruments when applying the voltage. When this function is enabled, the output current is clamped to the limit (compliance value) to prevent damage to the device under test (DUT).

Stress control: The high voltage stress must be well controlled to avoid overstressing the device, which can lead to unexpected device breakdown. Newer systems may offer a “soft bias” function that allows the forced voltage or current to reach the desired value by ramping gradually at the start or the end of the stress, or when aborting the test, instead of changing suddenly. This helps to prevent in-rush currents and unexpected device breakdowns. In addition, it serves as a timing control over the process of applying stress.

Proper test abort design: The test program must be designed in a way that allows the user to abort the test (that is, terminate the test early) without losing the data already acquired. Test configurations with a “soft abort” function offer the advantage that test data will not be lost at the termination of the test program, which is especially useful for those users who do not want to continue the test as planned. For instance, imagine that 20 devices are being evaluated over the course of 10 hours in a breakdown test and one of the tested devices exhibits abnormal behavior (such as substantial leakage current). Typically, that user will want to stop the test and redesign the test plan without losing the data already acquired.

Data management: Reliability tests can run over many hours, days, or weeks, and have the potential to amass enormous datasets, especially when testing multiple sites. Rather than collecting all the data produced, systems with data compression functions allow logging only the data important to that particular work. The user can choose when to start data compression and how the data will be recorded. For example, data points can be logged when the current shift exceeds a specified percentage as compared to previously logged data and when the current is higher than a specified noise level.
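
A minimal sketch of such a logging filter is shown below: a reading is kept only if it clears a noise floor and has shifted from the last logged value by more than a set percentage. The thresholds and data layout are assumptions for illustration, not the behavior of any particular vendor's software.

```python
# Simple data-compression filter for long reliability logs: keep a reading
# only if it is above the noise floor and has shifted from the last logged
# value by more than a chosen percentage. Thresholds are illustrative.

def compress_log(samples, shift_pct=5.0, noise_floor_a=1e-12):
    """samples: iterable of (time_s, current_a) tuples. Returns the reduced list."""
    logged = []
    last_kept = None
    for t, i in samples:
        if abs(i) < noise_floor_a:
            continue                      # below noise floor, not worth keeping
        if last_kept is None or abs(i - last_kept) / abs(last_kept) * 100.0 > shift_pct:
            logged.append((t, i))
            last_kept = i
    return logged

# Example: a slowly drifting leakage current with one abrupt jump.
raw = [(t, 1e-9 * (1.0 + 0.001 * t) + (5e-9 if t > 500 else 0.0))
       for t in range(0, 1000, 10)]
print(f"{len(raw)} raw points -> {len(compress_log(raw))} logged points")
```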

A comprehensive hardware and software solution is essential to address these test considerations effectively, ideally one that supports high power semiconductor characterization at the device, wafer and cassette levels. The measurement considerations described above, although very important, are too often left unaddressed in commercial software implementations. The software should also offer sufficient flexibility to allow users to switch easily between manual operation for lab use and fully automated operation for production settings, using the same test plan. It should also be compatible with a variety of sourcing and measurement hardware, typically various models of SMU instruments equipped with sufficient dynamic range to address the application’s high power testing levels.

With the right programming environment, system designers can readily configure test systems with anything from a few instruments on a benchtop to an integrated, fully automated rack of instruments on a production floor, complete with standard automatic probers. For example, Keithley’s Automated Characterization Suite (ACS) integrated test plan and wafer description function allow setting up single or multiple test plans on one wafer and selectively executing them later, either manually or automatically. This test environment is compatible with many advanced SMU instruments, including low current SMU instruments capable of sourcing up to 200V and measuring with 0.1fA resolution and high power SMU instruments capable of sourcing up to 3kV and measuring with 1fA resolution.

Figure 1: Example of a stress vs. time diagram for Vds_Vramp test for a single device and the associated device connection. Drain, gate and source are each connected to an SMU instrument respectively. The drain is used for VDS stress and measure; the VDS range is extended by a positive bias on drain and a negative bias on source. A soft bias (gradual change of stress) is enabled at the beginning and end of the stress (initial bias and post bias). Measurements are performed at the “x” points.

The test development environment includes a VDS breakdown test module that’s designed to apply two different stress tests across the drain and source of the MOSFET structure (or across the collector and emitter of an IGBT) for VDS ramp and HTRB reliability assessment.

Vds_Vramp – This test sequence is useful for evaluating the effect of a drain-source bias on the device’s parameters and offers a quick method of parametric verification (FIGURE 1). It has three stages: optional pre-test, main stress-measure, and optional post-test. During the pre-test, a constant voltage is applied to verify the initial integrity of the body diode of the MOSFET; if the body diode is determined to be good, the test proceeds to the main stress-measure stage. Starting at a lower level, the drain-source voltage stress is applied to the device and ramps linearly to a point higher than the rated maximum voltage or until the user-specified breakdown criteria is reached. If the tested device is not broken at the main stress stage, the test proceeds to the next step, the post-test, in which a constant voltage is applied to evaluate the state of the device, similar to the pre-test. The measurements throughout the test sequence are made at both source and gate for multi-device testing (or drain for the single device case) and the breakdown criteria will be based on the current measured at source (or drain for a single device).
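
The three-stage flow is easy to see as pseudocode. The Python sketch below mirrors the pre-test / stress-measure / post-test structure; the smu object and its force_voltage()/measure_current() methods are hypothetical placeholders, not calls from Keithley ACS or any real instrument driver.

```python
# Skeleton of a Vds ramp (Vds_Vramp) sequence: optional pre-test, linear
# stress ramp with breakdown check, optional post-test. The `smu` object
# and its force_voltage()/measure_current() methods are hypothetical
# placeholders, not an actual instrument driver API.

def vds_vramp(smu, v_start, v_stop, v_step, i_breakdown, v_verify=None):
    results = []

    # Optional pre-test: constant bias to verify body-diode integrity.
    if v_verify is not None:
        smu.force_voltage(v_verify)
        if abs(smu.measure_current()) > i_breakdown:
            return {"status": "failed pre-test", "data": results}

    # Main stress-measure stage: ramp VDS linearly, watch for breakdown.
    v = v_start
    while v <= v_stop:
        smu.force_voltage(v)             # soft-bias ramping assumed to be handled by the driver
        i = smu.measure_current()
        results.append((v, i))
        if abs(i) > i_breakdown:         # breakdown criterion reached
            smu.force_voltage(0.0)
            return {"status": f"breakdown at {v:.1f} V", "data": results}
        v += v_step

    # Optional post-test: same constant bias to confirm the device survived.
    if v_verify is not None:
        smu.force_voltage(v_verify)
        results.append(("post", smu.measure_current()))

    smu.force_voltage(0.0)
    return {"status": "passed", "data": results}
```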

Vds_Constant – This test sequence can be set up for reliability testing over an extended period and at elevated temperature, such as an HTRB test (FIGURE 2). The Vds_Constant test sequence has a structure similar to that of the Vds_Vramp, with a constant voltage stress applied to the device during the stress stage and different breakdown settings. The stability of the leakage current (IDSS) is monitored throughout the test.

FIGURE 2. Example of a stress vs. time diagram for the Vds_Constant test sequence for a vertical structure and multi-device case, and the associated device connection. The common drain, gate and source are each connected to an SMU instrument. The source is used for VDS stress and measure; the VDS range is extended by a positive bias on the drain and a negative bias on the source. A soft bias (gradual change of stress) is enabled at the beginning and end of the stress (initial bias and post bias). Measurements are performed at the “x” points.

Conclusion

HTRB testing offers wide bandgap device developers invaluable insights into the long-term reliability and performance of their designs. •


LISHAN WENG is an applications engineer at Keithley Instruments, Inc. in Cleveland, Ohio, which is part of the Tektronix test and measurement portfolio. [email protected].