
A simple way to turn carbon nanotubes into valuable graphene nanoribbons may be to grind them, according to research led by Rice University.

The trick, said Rice materials scientist Pulickel Ajayan, is to mix two types of chemically modified nanotubes. When they come into contact during grinding, they react and unzip, a process that until now has depended largely on reactions in harsh chemical solutions.

The research by Ajayan and his international collaborators appears in Nature Communications.

To be clear, Ajayan said, the new process is still a chemical reaction that depends on molecules purposely attached to the nanotubes, a process called functionalization. The most interesting part to the researchers is that a process as simple as grinding could deliver strong chemical coupling between solid nanostructures and produce novel forms of nanostructured products with specific properties.

“Chemical reactions can easily be done in solutions, but this work is entirely solid state,” he said. “Our question is this: If we can use nanotubes as templates, functionalize them and get reactions under the right conditions, what kinds of things can we make with a large number of possible nanostructures and chemical functional groups?”

The process should enable many new chemical reactions and products, said Mohamad Kabbani, a graduate student at Rice and lead author of the paper. “Using different functionalities in different nanoscale systems could revolutionize nanomaterials development,” he said.

Highly conductive graphene nanoribbons, thousands of times smaller than a human hair, are finding their way into the marketplace in composite materials. The nanoribbons boost the materials’ electronic properties and/or strength.

“Controlling such structures by mechano-chemical transformation will be the key to finding new applications,” said co-author Thalappil Pradeep, a professor of chemistry at the Indian Institute of Technology Chennai. “Soft chemistry of this kind can happen under many conditions, contributing to a better understanding of materials processing.”

In their tests, the researchers prepared two batches of multi-walled carbon nanotubes, one with carboxyl groups and the other with hydroxyl groups attached. When ground together for up to 20 minutes with a mortar and pestle, the chemical additives reacted with each other, triggering the nanotubes to unzip into nanoribbons, with water as a byproduct.
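
One plausible way to picture this solid-state coupling, offered here only as an illustration and not spelled out in the paper, is a condensation between the two kinds of functional groups, which would also account for the water byproduct:

```latex
% Hypothetical condensation between the functionalized tubes (illustrative sketch):
% a carboxyl group on one nanotube couples with a hydroxyl group on another,
% releasing water as the reported byproduct.
\mathrm{CNT{-}COOH \;+\; HO{-}CNT' \;\longrightarrow\; CNT{-}CO{-}O{-}CNT' \;+\; H_2O}
```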

“That serendipitous observation will lead to further systematic studies of nanotube reactions in the solid state, including ab-initio theoretical models and simulations,” Ajayan said. “This is exciting.”

The experiments were duplicated by participating labs at Rice, at the Indian Institute of Technology and at the Lebanese American University in Beirut. They were performed in standard lab conditions as well as in a vacuum, outside in the open air and at variable humidity, temperatures, times and seasons.

The researchers who carried out the collaboration on three continents still don’t know precisely what’s happening at the nanoscale. “It is an exothermic reaction, so the energy’s enough to break up the nanotubes into ribbons, but the details of the dynamics are difficult to monitor,” Kabbani said. “There’s no way we can grind two nanotubes in a microscope and watch it happen. Not yet, anyway.”

But the results speak for themselves.

“I don’t know why people haven’t explored this idea, that you can control reactions by supporting the reactants on nanostructures,” Ajayan said. “What we’ve done is very crude, but it’s a beginning and a lot of work can follow along these lines.”

CEA-Leti is hosting its seventh workshop on innovative memory technologies following the 17th annual LetiDays Grenoble, June 24-25, on the Minatec campus.

Topics at LetiWorkshop Memory on June 26 will range from short-term to long-term memory solutions, including:

  • Flash memories for embedded or stand-alone applications
  • Resistive memory technologies, such as phase-change memories, conductive bridging memories and oxide-based memories
  • Innovative ideas covering non-volatile logics and bio-inspired architectures

The workshop will feature presentations by industrial and academic researchers with two main sessions in the morning. The first one, “NVM vision on standalone and embedded markets”, includes presentations by STMicroelectronics, Silicon Storage Technology and HGST, and the second one, “Emerging memory opportunities,” includes talks from Yole, IBM and Micron.

The afternoon is dedicated to niche applications and outlooks such as “NVM in disruptive applications”. This session will include talks on security applications, radiation effects and FPGA. The final session, “Memories for biomedical & neuromorphic applications”, features talks from Clinatec and the University of Milan.

Invited speakers are:

– STMicroelectronics, Delphine Maury
– SST, Nhan Do
– HGST, Jeff Childress
– CEA-Leti, Fabien Clermidy
– Yole Developpement, Yann De Charentenay
– IBM, Milos Stanisavljevic
– CEA-Leti, Gabriel Molas
– Micron, Innocenzo Tortorelli
– CEA-Tech, Romain Wacquez
– University of Padova, Alessandro Paccagnella
– CEA-Leti, Boubacar Traore
– CEA-Leti, Jeremy Guy
– CEA-Clinatec, François Berger
– University of Milan, Daniele Ielmini
– CEA-Leti, Daniele Garbin

Visit LetiWorkshop Memory for registration and other information.

Two young researchers working at the MIPT Laboratory of Nanooptics and Plasmonics, Dmitry Fedyanin and Yury Stebunov, have developed an ultracompact highly sensitive nanomechanical sensor for analyzing the chemical composition of substances and detecting biological objects, such as viral disease markers, which appear when the immune system responds to incurable or hard-to-cure diseases, including HIV, hepatitis, herpes, and many others. The sensor will enable doctors to identify tumor markers, whose presence in the body signals the emergence and growth of cancerous tumors.

This image shows the principle of the sensor. CREDIT: Dmitry Fedyanin and Yury Stebunov

The sensitivity of the new device is best characterized by one key figure: according to its developers, the sensor can track, in real time, changes of just a few kilodaltons in the mass of the cantilever. One dalton is roughly the mass of a proton or neutron, and several thousand daltons is the mass of individual proteins and DNA molecules. So the new optical sensor should allow diseases to be diagnosed long before they can be detected by any other method, paving the way for a new generation of diagnostics.

The device, described in an article published in the journal Scientific Reports, is an optical or, more precisely, optomechanical chip. “We’ve been following the progress made in the development of micro- and nanomechanical biosensors for quite a while now and can say that no one has been able to introduce a simple and scalable technology for parallel monitoring that would be ready to use outside a laboratory. So our goal was not only to achieve high sensitivity and make the sensor compact, but also to make it scalable and compatible with standard microelectronics technologies,” the researchers said.

Unlike similar devices, the new sensor has no complex junctions and can be produced with the standard CMOS process technology used in microelectronics. The sensor does not contain a single circuit, and its design is very simple. It consists of two parts: a photonic (or plasmonic) nanoscale waveguide to control the optical signal, and a cantilever suspended over the waveguide.

A cantilever, or beam, is a long, thin strip of microscopic dimensions (5 micrometers long, 1 micrometer wide and 90 nanometers thick), connected tightly to the chip at one end. To get an idea of how it works, imagine pressing one end of a ruler tightly to the edge of a table and letting the other end hang freely in the air. If you nudge the free end and then let go, the ruler will start making mechanical oscillations at a certain frequency. That is how the cantilever behaves. The only difference between the oscillations of the ruler and those of the cantilever is the frequency, which depends on the materials and geometry: while the ruler oscillates at several tens of hertz, the cantilever’s oscillation frequency is measured in megahertz. In other words, it makes a few million oscillations per second!
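
A rough plausibility check on that megahertz figure: the fundamental flexural resonance of a clamped-free beam follows from its geometry and material. The short sketch below assumes a silicon cantilever (the Young's modulus and density are assumptions; the article gives only the dimensions).

```python
import math

# Fundamental flexural resonance of a clamped-free (cantilever) beam:
#   f1 = (lambda1^2 / (2*pi)) * (t / L^2) * sqrt(E / (12 * rho)),  lambda1 ~ 1.875
# Material values are assumed (silicon); only the geometry comes from the article.
E = 169e9     # Young's modulus, Pa (assumed, silicon)
rho = 2330.0  # density, kg/m^3 (assumed, silicon)
L = 5e-6      # length, m (from the article)
t = 90e-9     # thickness, m (from the article)

lambda1 = 1.875
f1 = (lambda1 ** 2 / (2 * math.pi)) * (t / L ** 2) * math.sqrt(E / (12 * rho))
print(f"Estimated fundamental frequency: {f1 / 1e6:.1f} MHz")  # ~5 MHz, i.e. megahertz range
```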

There are two optical signals travelling through the waveguide during operation: the first sets the cantilever in motion, and the second reads out a signal containing information about that motion. The inhomogeneous electromagnetic field of the control signal’s optical mode induces a dipole moment in the cantilever and simultaneously exerts a force on that dipole, so the cantilever starts to oscillate.
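
In the standard dipole picture (a textbook description consistent with, though not quoted from, the article), the gradient of the optical field both polarizes the cantilever and pulls on the induced dipole:

```latex
% Induced dipole and time-averaged optical gradient force (standard dipole approximation);
% \alpha is the cantilever's effective polarizability in the mode field.
\mathbf{p} = \alpha\,\mathbf{E}, \qquad
\mathbf{F}_{\mathrm{grad}} = \tfrac{1}{2}\,\alpha\,\nabla\!\left\langle |\mathbf{E}(\mathbf{r},t)|^{2} \right\rangle
```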

The sinusoidally modulated control signal makes the cantilever oscillate at an amplitude of up to 20 nanometers. The oscillations determine the parameters of the second signal, the output power of which depends on the cantilever’s position.

The highly localized optical modes of nanoscale waveguides, which create a strong gradient in the electric field intensity, are key to inducing the cantilever oscillations. Because the electromagnetic field in such systems changes over distances of tens of nanometers, researchers use the term “nanophotonics” – so the prefix “nano” is not used here just as a fad! Without the nanoscale waveguide and cantilever, the chip simply wouldn’t work: a big cantilever cannot be set oscillating by freely propagating light, and the effect of chemical changes on its surface on the oscillation frequency would be far less noticeable.

Cantilever oscillations make it possible to determine the chemical composition of the environment in which the chip is placed. That’s because the frequency of mechanical vibrations depends not only on the materials’ dimensions and properties, but also on the mass of the oscillatory system, which changes during a chemical reaction between the cantilever and the environment. By placing different reagents on the cantilever, researchers make it react with specific substances or even biological objects. If you place antibodies to certain viruses on the cantilever, it’ll capture the viral particles in the analyzed environment. Oscillations will occur at a lower or higher amplitude depending on the virus or the layer of chemically reactive substances on the cantilever, and the electromagnetic wave passing through the waveguide will be dispersed by the cantilever differently, which can be seen in the changes of the intensity of the readout signal.
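
For small amounts of captured material, the resulting frequency shift follows the usual resonant mass-sensing relation (a textbook approximation, not a formula taken from the paper):

```latex
% Fractional frequency shift for a small added mass \Delta m;
% m_{\mathrm{eff}} is the effective (modal) mass of the cantilever.
\frac{\Delta f}{f_0} \;\approx\; -\,\frac{1}{2}\,\frac{\Delta m}{m_{\mathrm{eff}}}
```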

Calculations done by the researchers show that the new sensor will combine high sensitivity with comparative ease of production and miniature dimensions, allowing it to be used in portable devices such as smartphones and wearable electronics. One chip, several millimeters in size, will be able to accommodate several thousand such sensors, each configured to detect different particles or molecules. Thanks to the simplicity of the design, the price will most likely depend on the number of sensors and should be much lower than that of competing devices.

CEA-Leti today announced that it has demonstrated a path to fabricating high-density micro-LED arrays for the next generation of wearable and nomadic systems in a process that is scalable to standard IC manufacturing.

High-brightness, enhanced-vision systems such as head-up and head-mounted displays can improve safety and performance in fields such as aeronautics and automotive, where the displays allow pilots and drivers to receive key navigation data and information in their line of sight. For consumers, smart glasses or nomadic projection devices with augmented reality provide directions, safety updates, advertisements and other information across the viewing field. LED microdisplays are ideally suited for such wearable systems because of their small footprint, low power consumption, high contrast ratio and ultra-high brightness.

Leti researchers have developed gallium-nitride (GaN) and indium gallium-nitride (InGaN) LED technology for producing high-brightness, emissive microdisplays for these uses, which are expected to grow dramatically in the next three to five years. For example, the global research firm MarketsandMarkets forecasts the market for head-up displays alone to grow from $1.37 billion in 2012 to $8.36 billion in 2020.

“Currently available microdisplays for both head-mounted and compact head-up applications suffer from fundamental technology limitations that prevent the design of very low-weight, compact and low-energy-use products,” said Ludovic Poupinet, head of Leti’s Optics and Photonics Department. “Leti’s technology breakthrough is the first demonstration of a high-brightness, high-density micro-LED array that overcomes these limitations and is scalable to a standard microelectronic large-scale process. This technology provides a low-cost, leading-edge solution to companies that want to target the fast-growth markets for wearable vision systems.”

Announced during Display Week 2015 in San Jose, Calif., Leti’s technology innovation is based on micro-LED arrays that are hybridized on a silicon backplane. Key innovations include epitaxial growth of LED layers on sapphire or other substrates, micro-structuration of LED arrays (10μm pitches or smaller), and 3D heterogeneous integration of such LED arrays on CMOS active-matrices.

These innovations make it possible to produce a brightness of 1 million cd/m² for monochrome devices and 100 kcd/m² for full-color devices with a device size below one inch and 2.5 million pixels. This is a 100- to 1,000-times improvement compared to existing self-emissive microdisplays, with very good power efficiency. The technology also will allow fabrication of very compact products that significantly reduce system-integration constraints.
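
As a quick consistency check on those figures, the sketch below assumes a hypothetical 1920 x 1280 array (the announcement states only roughly 2.5 million pixels) at the 10 μm pitch mentioned above and confirms that the diagonal stays below one inch.

```python
import math

# Assumed resolution for illustration; the announcement gives only ~2.5 million pixels.
cols, rows = 1920, 1280      # 1920 * 1280 = 2,457,600 pixels
pitch_um = 10.0              # micro-LED pitch from the announcement, micrometers

diag_mm = math.hypot(cols, rows) * pitch_um / 1000.0
print(f"Pixels: {cols * rows:,}")
print(f"Diagonal: {diag_mm:.1f} mm ({diag_mm / 25.4:.2f} inch)")  # ~23 mm, below one inch
```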

The high-density micro-LED array process was developed in collaboration with III-V Lab.

CEA-Leti plays a role in the development of the Internet of Things as a provider of key underlying technologies that help its partners take advantage of the opportunities the IoT will offer. These technologies include new sensors, energy-harvesting systems, ultra-low-power communication technologies and ultra-low-power digital processors.

Building on this foundation, the 17th annual LetiDays Grenoble on June 24-25 will expand the conversation with presentations about Internet of Things-augmented mobility, which is revolutionizing the way we interact with appliances, infrastructure and countless common objects that are part of our daily lives.

Another conference theme, spanning the IoT and broader markets, is Leti’s breakthroughs in silicon technologies, sensors, telecommunications, power management in wearable systems, health applications, transportation and cities of the future. We will focus on how to bring increases in performance, efficiency and security to these fields and markets.

The two-day event will feature more than 40 presentations, many networking opportunities, plus showroom and exhibition halls.

Speakers include:

  • Leti CEO Marie-Noëlle Semeria
  • Suresh Venkatesan, SVP technology development, GLOBALFOUNDRIES
  • Jean-Pierre Cojan, Director, Strategy and Transformation, Safran
  • Christophe Fourtet, Chief Science Officer and co-founder, SIGFOX

In addition, Prof. Alim Louis Benabid will give a keynote address on June 24 at 6 pm. Benabid is a neurosurgeon and board chairman of the Edmond J. Safra Biomedical Research Center Clinatec on the Minatec campus in Grenoble. He has won multiple awards for his pioneering work in the treatment for Parkinson’s disease, including the Albert-Lasker Prize in 2014 and the Breakthrough Prize in Life Sciences in 2015.

Leti also will host technology workshops during the week of June 22.

Samsung Electronics Co., Ltd. today announced the Samsung ARTIK platform to allow faster, simpler development of new enterprise, industrial and consumer applications for the Internet of Things (IoT). ARTIK is an open platform that includes a best-in-class family of integrated production-ready modules, advanced software, development boards, drivers, tools, security features and cloud connectivity designed to help accelerate development of a new generation of better, smarter IoT devices, solutions and services.

“We are providing the industry’s most advanced, open and secure platform for developing IoT products”, said Young Sohn, president and chief strategy officer, Samsung Electronics. “By leveraging Samsung’s high-volume manufacturing, advanced silicon process and packaging technologies, and extensive ecosystem, ARTIK allows developers to rapidly turn great ideas into market leading IoT products and applications.”

The ARTIK Family

All members of the Samsung ARTIK family incorporate unique embedded hardware security technology, on-board memory and advanced processing power in an open platform. Security is also a key element of the advanced software integrated into the platform, along with the ability to connect to the Internet for cloud-based data analytics and enhanced services. As an open platform, Samsung ARTIK can be easily customized for more rapid deployment of IoT devices and the services that can be delivered using them.

The Samsung ARTIK platform comes in a variety of configurations to meet the specific requirements of a wide range of devices from wearables and home automation, to smart lighting and industrial applications. Initial members of the ARTIK family include:

  • ARTIK 1, the smallest IoT module currently available in the industry at 12mm-by-12mm, combines Bluetooth/BLE connectivity and a nine-axis sensor with best-in-class compute capabilities and power consumption. It is specifically designed for low-power, small form-factor IoT applications.
  • ARTIK 5 delivers an outstanding balance of size, power and price-performance and is ideal for home hubs, drones and high-end wearables. It incorporates a 1GHz dual-core processor and on-board DRAM and flash memory.
  • ARTIK 10 delivers advanced capabilities and high-performance to IoT with an eight-core processor, full 1080p video decoding/encoding, 5.1 audio and 2GB DRAM along with 16GB flash memory. The Samsung ARTIK 10 includes Wi-Fi, Bluetooth/BLE and ZigBee connectivity and is designed for use with home servers, media applications, and in industrial settings.

“Industry requirements for IoT devices vary in terms of battery life, computational horse power and form factor,” said Sohn. “With this family of ARTIK offerings, Samsung is directly addressing the needs of the widest range of customers, uses and applications.”

Researchers at the University of California, Riverside Bourns College of Engineering and the Russian Academy of Sciences have successfully demonstrated pattern recognition using a magnonic holographic memory device, a development that could greatly improve speech and image recognition hardware.

Pattern recognition focuses on finding patterns and regularities in data. The uniqueness of the demonstrated work is that the input patterns are encoded into the phases of the input spin waves.

Clockwise are: photo of the prototype device; schematic of the eight-terminal magnonic holographic memory prototype; and a collection of experimental data obtained for two magnonic matrixes. Credit: UC Riverside

Spin waves are collective oscillations of spins in magnetic materials. Spin wave devices are advantageous over their optical counterparts because they are more scalable due to a shorter wavelength. Also, spin wave devices are compatible with conventional electronic devices and can be integrated within a chip.

The researchers built a prototype eight-terminal device consisting of a magnetic matrix with micro-antennas to excite and detect the spin waves. Experimental data collected for several magnonic matrixes show that unique output signatures correspond to specific phase patterns. The micro-antennas allow the researchers to generate and recognize any input phase pattern, a big advantage over existing practices.

The spin waves then propagate through the magnetic matrix and interfere. Some input phase patterns produce a high output voltage, while other combinations result in a low output voltage, where “high” and “low” are defined relative to a reference voltage (i.e., the output is high if the output voltage exceeds 1 millivolt and low if it is less than 1 millivolt).
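
The decision rule can be illustrated with a toy model (purely for intuition; it ignores the actual spin-wave physics of the magnetic matrix): unit-amplitude waves carrying the input phases are summed at an output port, and the resulting amplitude, mapped to a voltage with an arbitrary scale factor, is compared against the 1-millivolt reference.

```python
import numpy as np

def recognize(input_phases_deg, v_ref_mV=1.0, scale_mV=0.5):
    """Toy interference readout: superpose unit-amplitude waves with the given
    input phases, convert the net amplitude to a voltage (arbitrary scale factor)
    and threshold it against the reference voltage."""
    phases = np.deg2rad(np.asarray(input_phases_deg, dtype=float))
    amplitude = abs(np.sum(np.exp(1j * phases)))   # constructive vs. destructive interference
    v_out_mV = scale_mV * amplitude
    return ("high" if v_out_mV > v_ref_mV else "low", round(v_out_mV, 3))

print(recognize([0, 0, 0, 0]))      # in-phase inputs -> constructive -> ('high', 2.0)
print(recognize([0, 180, 0, 180]))  # alternating phases -> destructive -> ('low', 0.0)
```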

It takes about 100 nanoseconds for recognition, which is the time required for spin waves to propagate and to create the interference pattern.

The most appealing property of this approach is that all of the input ports operate in parallel. It takes the same amount of time to recognize patterns (numbers) from 0 to 999, and from 0 to 10,000,000. Potentially, magnonic holographic devices can be fundamentally more efficient than conventional digital circuits.

The work builds upon findings published last year by the researchers, who showed a 2-bit magnonic holographic memory device can recognize the internal magnetic memory states via spin wave superposition. That work was recognized as a top 10 physics breakthrough by Physics World magazine.

“We were excited by that recognition, but the latest research takes this to a new level,” said Alex Khitun, a research professor at UC Riverside, who is the lead researcher on the project. “Now, the device works not only as a memory but also as a logic element.”

The latest findings were published in a paper called “Pattern recognition with magnonic holographic memory device” in the journal Applied Physics Letters. In addition to Khitun, authors are Frederick Gertz, a graduate student who works with Khitun at UC Riverside, and A. Kozhevnikov, Y. Filimonov and G. Dudko, all from the Russian Academy of Sciences.

Holography is a technique based on the wave nature of light which allows the use of wave interference between the object beam and the coherent background. It is commonly associated with images being made from light, such as on driver’s licenses or paper currency. However, this is only a narrow field of holography.

Holography has also been recognized as a future data storage technology with unprecedented capacity and the ability to write and read large amounts of data in a highly parallel manner.

The main challenge associated with magnonic holographic memory is the scaling of the operational wavelength, which requires the development of sub-micrometer scale elements for spin wave generation and detection.

By combining 3D holographic lithography and 2D photolithography, researchers from the University of Illinois at Urbana-Champaign have demonstrated a high-performance 3D microbattery suitable for large-scale on-chip integration with microelectronic devices.

“This 3D microbattery has exceptional performance and scalability, and we think it will be of importance for many applications,” explained Paul Braun, a professor of materials science and engineering at Illinois. “Micro-scale devices typically utilize power supplied off-chip because of difficulties in miniaturizing energy storage technologies. A miniaturized high-energy and high-power on-chip battery would be highly desirable for applications including autonomous microscale actuators, distributed wireless sensors and transmitters, monitors, and portable and implantable medical devices.”

CREDIT: University of Illinois

“Due to the complexity of 3D electrodes, it is generally difficult to realize such batteries, let alone the possibility of on-chip integration and scaling. In this project, we developed an effective method to make high-performance 3D lithium-ion microbatteries using processes that are highly compatible with the fabrication of microelectronics,” stated Hailong Ning, a graduate student in the Department of Materials Science and Engineering and first author of the article, “Holographic Patterning of High Performance on-chip 3D Lithium-ion Microbatteries,” appearing in Proceedings of the National Academy of Sciences.

“We utilized 3D holographic lithography to define the interior structure of the electrodes and 2D photolithography to create the desired electrode shape,” Ning added. “This work merges important concepts in fabrication, characterization, and modeling, showing that the energy and power of the microbattery are strongly related to the structural parameters of the electrodes, such as size, shape, surface area, porosity, and tortuosity. A significant strength of this new method is that these parameters can be easily controlled during the lithography steps, which offers unique flexibility for designing next-generation on-chip energy storage devices.”
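
One common way such structural parameters enter battery performance, given here as a standard porous-electrode relation rather than a result from the paper, is through the effective ion transport in the electrolyte-filled pores:

```latex
% Effective diffusivity in a porous electrode: \varepsilon is porosity,
% \tau is tortuosity, D is the bulk electrolyte diffusivity.
D_{\mathrm{eff}} \;=\; D\,\frac{\varepsilon}{\tau}
```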

Enabled by a 3D holographic patterning technique, in which multiple optical beams interfere inside the photoresist to create the desired 3D structure, the battery possesses well-defined, periodically structured porous electrodes that facilitate fast transport of electrons and ions inside the battery, offering supercapacitor-like power.
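
The patterning principle can be sketched in a few lines (the beam geometry and photoresist parameters below are illustrative assumptions, not those used in the paper): the exposure dose at each point is the intensity of the superposed plane waves, and voxels above a threshold dose define the periodic 3D scaffold that remains after development.

```python
import numpy as np

# Four interfering plane waves; wave vectors are arbitrary illustrative choices.
wavelength = 0.532e-6                 # assumed exposure wavelength, m
k0 = 2 * np.pi / wavelength
directions = np.array([
    [0.00,  0.00, 1.0],               # central beam
    [0.30,  0.00, 1.0],               # three tilted side beams
    [-0.15,  0.26, 1.0],
    [-0.15, -0.26, 1.0],
])
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
k_vectors = k0 * directions

# Sample a small cube of photoresist.
coords = np.linspace(0.0, 2e-6, 32)
X, Y, Z = np.meshgrid(coords, coords, coords, indexing="ij")
points = np.stack([X, Y, Z], axis=-1)          # shape (32, 32, 32, 3)

field = np.zeros(X.shape, dtype=complex)
for k in k_vectors:
    field += np.exp(1j * (points @ k))         # unit-amplitude plane waves
intensity = np.abs(field) ** 2                 # local exposure dose

threshold = 0.6 * intensity.max()
print(f"Exposed volume fraction: {(intensity > threshold).mean():.2f}")
```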

“Although accurate control of the interfering optical beams is required for 3D holographic lithography, recent advances have significantly simplified the required optics, enabling creation of structures with a single incident beam and standard photoresist processing. This makes the approach highly scalable and compatible with microfabrication,” stated John Rogers, a professor of materials science and engineering, who has worked with Braun and his team to develop the technology.

“Micro-engineered battery architectures, combined with high energy material such as tin, offer exciting new battery features including high energy capacity and good cycle lives, which provide the ability to power practical devices,” stated William King, a professor of mechanical science and engineering, who is a co-author of this work.

To date, chip-based retinal implants have only permitted a rudimentary restoration of vision. However, modifying the electrical signals emitted by the implants could change that. This is the conclusion of the initial published findings of a project sponsored by the Austrian Science Fund FWF, which showed that two specific retinal cell types respond differently to certain electrical signals – an effect that could improve the perception of light-dark contrasts.

“Making the blind really see – that will take some time,” says Frank Rattay of the Institute of Analysis and Scientific Computing at the Vienna University of Technology – TU Wien. “But in the case of certain diseases of the eyes, it is already possible to restore vision, albeit still highly impaired, by means of retinal implants.”

Pulse emitter

To achieve this, microchips implanted in the eye convert light signals into electrical pulses, which then stimulate the cells of the retina. One major problem with this approach is that the various types of cells that respond differently to light stimuli in a healthy eye are all stimulated to the same degree. This greatly reduces the perception of contrast.

“But it might be possible,” Rattay says, “to stimulate one cell type more than the other by means of special electrical pulses, thus enhancing the perception of contrast.”

Within the framework of an FWF project, he and his team have discovered some promising approaches. Together with colleagues Shelley Fried of Harvard Medical School and Eberhard Zrenner of University Hospital Tübingen, he is now corroborating the simulated results with experimental findings.

Simulated & stimulated

With the help of a sophisticated computer simulation of two retinal cell types, Rattay and his team have discovered something very exciting. They found that by selecting specific electrical pulses, different biophysical processes can actually be activated in the two cell types. For example, monophasic stimulation, where the electrical polarity of the signal from the retinal implant does not change, leads to stronger depolarisation in one cell type than in the other.

“Depolarization means that the negative charge that prevails in cells switches briefly to a positive charge. This is the mechanism by which signals are propagated along nerves,” Rattay explains. This charge reversal was significantly weaker in the other cell type. In their simulation, the team also found as much as a fourfold difference in how the calcium concentrations in the two cell types respond to a monophasic signal.

On and off

“Calcium is an important signal molecule in many cells and plays a key role in information processing. For this reason, we specifically considered calcium concentrations in our simulation by considering the activity of membrane proteins involved in calcium transport,” explains Paul Werginz, a colleague of Rattay and lead author of the recently published paper.

Concretely, the team devised models of two retinal cell types that are designated as ON and OFF cells. ON cells react more strongly when the light is brighter at the centre of their location, while OFF cells react more strongly when the light is more intense at the edges. The two cell types are arranged in the retina in such a way as to greatly enhance contrast. The problem is that instead of light pulses, conventional retinal implants emit electrical pulses that elicit the same biochemical reactions in both cell types. Consequently, contrast perception is greatly reduced. However, Rattay’s work shows that this needn’t be the case.

Shape as a factor

Rattay’s research group also found that the shape of the individual ON and OFF cells affects the way in which the signals are processed. The different lengths of the two cell types, for example, are an important factor. This too, Rattay believes, could be an important finding that might help to significantly improve the performance of future retinal implants by modulating the electrical signals they emit. Rattay and his team are in hot pursuit of this goal in order to develop strategies that will allow many blind people to recognise objects visually.

Frank Rattay is a professor at the Institute of Analysis and Scientific Computing of the Vienna University of Technology, where he heads the Computational Neuroscience and Biomedical Engineering group. For decades he has been publishing internationally recognised work on the generation and optimisation of artificial nerve signals.

Interconnecting transistors and other components in the IC, in the package, on the printed circuit board and at the system and global network level is where the future limitations in performance, power, latency and cost reside.

BY BILL CHEN, ASE US, Sunnyvale, CA; BILL BOTTOMS, 3MT Solutions, Santa Clara, CA; DAVE ARMSTRONG, Advantest, Fort Collins, CO; and ATSUNOBU ISOBAYASHI, Toshiba, Kanagawa, Japan.

Heterogeneous Integration refers to the integration of separately manufactured components into a higher level assembly that in the aggregate provides enhanced functionality and improved operating characteristics.

In this definition, components should be taken to mean any unit, whether an individual die, MEMS device, passive component, assembled package or sub-system, that is integrated into a single package. Operating characteristics should also be taken in their broadest meaning, including characteristics such as system-level cost of ownership.

The mission of the ITRS Heterogeneous Integration Focus Team is to provide guidance to industry, academia and government to identify key technical challenges with sufficient lead time that they do not become roadblocks preventing the continued progress in electronics that is essential to the future growth of the industry and the realization of the promise of continued positive impact on mankind. The approach is to identify the requirements for heterogeneous integration in the electronics industry through 2030, determine the difficult challenges that must be overcome to meet these requirements and, where possible, identify potential solutions.

Background

The environment is rapidly changing and will require revolutionary changes after 50 years in which change was largely evolutionary. The major factors driving the need for change are:

  • We are approaching the end of Moore’s Law scaling.
  • 2.5D and 3D integration techniques are emerging.
  • The emerging world of the Internet of Everything will cause explosive growth in the need for connectivity.
  • Mobile devices such as smartphones and tablets are growing rapidly in number and in data communications requirements, driving explosive growth in the capacity of the global communications network.
  • Migration of data, logic and applications to the cloud drives demand for reduced latency while accommodating this network capacity growth.

These emerging demands cannot be satisfied with current electronics technology, and they are driving a new and different integration approach. The requirements for power, latency, bandwidth/bandwidth density and cost can only be met by a revolutionary change in the global communications network, together with all the components in that network and everything attached to it. Ensuring the reliability of this “future network” in an environment where transistors wear out will also require innovation in how we design and test the network and its components.

The transistors in today’s network account for less than 10 percent of total power, total latency and total cost. It is the interconnection of these transistors and other components in the IC, in the package, on the printed circuit board and at the system and global network level where the future limitations in performance, power, latency and cost reside. Overcoming these limitations will require heterogeneous integration of different materials, different devices (logic, memory, sensors, RF, analog, etc.) and different technologies (electronics, photonics, plasmonics, MEMS and sensors). New materials, manufacturing equipment and processes will be required to accomplish this integration and overcome these limitations.

Difficult challenges

The top-level difficult challenges will be the reduction of power per function, cost per function and latency while continuing the improvements in performance, physical density and reliability. Historically, scaling of transistors has been the primary contributor to meeting required system-level improvements. Heterogeneous integration must now provide solutions in the non-transistor infrastructure that make up for the shortfall from the historical pace of progress we have enjoyed from scaling CMOS. Packaging and test have found it difficult to scale their performance or cost per function to keep pace with transistors, and many difficult challenges must be met to maintain the historical pace of progress.

In order to identify the difficult challenges we have selected seven application areas that will drive critical future requirements to focus our work. These areas are:

  • Mobile products
  • Big data systems and interconnect
  • The cloud
  • Biomedical products
  • Green technology
  • Internet of Things
  • Automotive components and systems

An initial list of difficult challenges for heterogeneous integration in these application areas is presented in three categories: (1) on-chip interconnect, (2) assembly and packaging and (3) test. These are analyzed in line with the roadmapping process and will be used to define the top 10 challenges that have the potential to be “show stoppers” for the seven application areas identified above.

On-chip interconnect difficult challenges

The continued decrease in feature size, increase in transistor count and expansion into 3D structures are presenting many difficult challenges. While challenges in continuous scaling are discussed in the “More Moore” section, the difficult challenges of interconnect technology in devices with 3D structures are listed here. Note that this assumes a 3D structure with TSV, optical interconnects and passive devices in interposer substrates.

ESD (Electrostatic Discharge): Plasma damage to transistors from TSV etching, especially in the via-last scheme. A low-damage TSV etch process and the layout of protection diodes are the key factors.

CPI (Chip Package Interaction) Reliability [Process]: The low fracture toughness of ULK (ultra low-k) dielectrics causes failures such as delamination. Development of ULK materials with higher modulus and hardness is the key factor.

CPI (Chip Package Interaction) Reliability [Design]: Layout optimization is key for devices using a Cu/ULK structure.

Stress management in TSV [Via Last]: Yield and reliability in the Mx layers where the TSVs land are a concern.

Stress management in TSV [Via Middle]: Stress and deformation caused by copper extrusion in the TSV, and the keep-out zone (KOZ) in the transistor layout, are the issues.

Thermal management [Hot Spot]: Heat dissipation in the TSV is an issue. Effective homogenization of hot-spot heat, by either material or layout optimization, is the key factor.

Thermal management [Warpage]: Thermal expansion management of each interconnect layer is necessary in thinner Si substrates with TSVs.

Passive Device Integration [Performance]: Higher Q, in other words thicker metal lines and dielectrics with a lower loss tangent (tan δ), is key to achieving lower-power and lower-noise circuits.

Passive Device Integration [Cost]: Films with higher permittivity and higher permeability are required for higher density and a lower-footprint layout.

Implementation of Optical Interconnects: Optical interconnects for signaling, clock distribution and I/O require the development of a number of optical components, such as light sources, photodetectors, modulators, filters and waveguides. Replacing global interconnects with on-chip optical interconnects requires a breakthrough to overcome the cost issue.

Assembly and packaging difficult challenges

Today, assembly and packaging are often the limiting factors in performance, size, latency, power and cost. Although much progress has been made with the introduction of new packaging architectures and processes, with innovations in wafer-level packaging and system-in-package for example, a significantly higher rate of progress is required. The complexity of the challenge is increasing due to the unique demands of heterogeneous integration, which include the integration of diverse materials and diverse circuit fabric types into a single SiP architecture and the use of the third dimension.

Difficult packaging challenges by circuit fabric

  • Logic: Unpredictable hot-spot locations, high thermal density, high frequency, unpredictable workloads, limited by data bandwidth and data bottlenecks. High-bandwidth data access will require new solutions to the physical density of bandwidth.
  • Memory: Thermal density depends on memory type, and thermal density differences drive changes in package architecture and materials, thinned-device fault models, and test and redundancy-repair techniques. Packaging must support low-latency, high-bandwidth, large (>1Tb) memory in a hierarchical architecture in a single package and/or SiP.
  • MEMS: There is a virtually unlimited set of requirements. Issues to be addressed include hermetic vs. non‐hermetic, variable functional density, plumbing, stress control, and cost effective test solutions.
  • Photonics: Extreme sensitivity to thermal changes, O to E and E to O, optical signal connections, new materials, new assembly techniques, new alignment and test techniques.
  • Plasmonics: Requirements are yet to be determined, but they will be different from those of other circuit types. Issues to be addressed include acousto-magneto effects and nonlinear plasmonics.
  • Microfluidics: Sealing, thermal management and flow control must be incorporated into the package.

Most if not all of these will require new materials and new equipment for assembly and test to meet the 15 year Roadmap requirements.

Difficult packaging challenges by material

Semiconductors: Today the vast majority of semiconductor components are silicon based. In the future, both organic and compound semiconductors will be used, each with unique mechanical, thermal and electrical properties and requirements.

Conductors: Cu has replaced Au and Al in many applications but this is not good enough for future needs. Metal matrix composites and ballistic conductors will be required. Inserting some of these new materials will require new assembly, contacting and joining techniques.

Dielectrics: New high k dielectrics and low k dielectrics will be required. Fracture toughness and interfacial adhesion will be the key parameters. Packaging must provide protection for these fragile materials.

Molding compound: Improved thermal conductivity, thinner layers and lower CTE are key requirements.

Adhesives: The die-attach materials, flexible conductors and residue-free materials that are needed do not exist today.

Biocompatible materials: For applications in the healthcare and medical domain (e.g. body patches, implants, smart catheters, electroceuticals), semiconductor‐based devices have to be biocompatible. This involves the integration of new (flexible) materials to comply with specific packaging (form factor) requirements.

Difficult challenges for the testing of heterogeneous devices

The difficulties in testing heterogeneous devices can be broadly separated into three categories: Test Quality Assurance, Test Infrastructure, and Test Design Collaboration.

Test quality assurance needs to comprehend and place achievable quality and reliability metrics on each individual component prior to integration, in order to meet the heterogeneous system quality and reliability targets. Assembly and test flows will become intertwined and interdependent. They need to be constructed in a manner that maintains a cost-effective balance between yield loss and component cost, along with proper component fault isolation and quantification. The industry will be required to integrate components that cannot guarantee known good die (KGD) without insurmountable cost penalties, and this will require repair mechanisms that are visible and accessible to the integrator.

Test infrastructure hardware needs to comprehend multiple configurations of the same device to enable test-point insertion at partially assembled and fully assembled states. This includes, but is not limited to, different component heights, asymmetric component locations, and exposed metal contacts (including ESD challenges). Test infrastructure software needs to enable storing and using volume test data for multiple components that may or may not have been generated within the final integrator’s data domains but are critical for the final heterogeneous system’s functionality and quality. It also needs to enable methods for highly granular component tracking for subsequent joint supplier and integrator failure analysis and debug.

Test design collaboration is one of the biggest challenges the industry will need to overcome. Heterogeneous, highly integrated, highly functional systems will require test features co-designed across component boundaries, with more test coverage and debug capability than simple boundary scans. The challenge of breaking up what was once the responsibility of a wholly contained design-for-test team across multiple independent entities, each trying to protect its IP, is only magnified by the additional requirement that the jointly developed test solutions be standardized across multiple competing heterogeneous integrators. Industry-wide collaboration on, and adherence to, test standards will be required to maintain cost- and time-effective design cycles for highly desired components; such collaboration traditionally has only been required for communication protocols that cross component boundaries.

The roadmapping process

The objective of ITRS 2.0 for heterogeneous integration is to focus on a limited number of key challenges (10) that have the greatest potential to be “show stoppers,” while leaving other challenges identified and listed but without focus on detailed technical challenges and potential solutions. In this process, collaboration with other Focus Teams and Technical Working Groups will be a critical resource. While we will need collaboration with other groups both inside and outside the ITRS, some of these collaborations are critical for HI to address its mission. FIGURE 1 shows the major internal collaborations in three categories.

FIGURE 1. Collaboration priorities.

We expect to review these key challenges and our list of other challenges on a yearly basis and make changes so that our focus keeps up with changes in the key challenges. This will ensure that our efforts remain focused on the pre‐competitive technologies that have the greatest future value to our audience. There are four phases in the process detailed below.

1. Identify challenges for application areas: The process will involve collaboration with other focus teams, technical TWGs and other roadmapping groups, casting a wide net to identify all gaps and challenges associated with the seven selected application areas, as modified from time to time. This list of challenges will be large (perhaps hundreds), and the challenges will be scored by the HI team by difficulty and criticality.

2. Define potential solutions: Using the scoring in phase (1), a number (30-40) will be selected to identify potential solutions. The remainder will be archived for the next cycle of this process. This work will be coordinated with the same collaboration process defined above. These potential solutions will be scored by probability of success and cost.

3. Down-select to the 10 most critical challenges: The potential solutions with the lowest probability of success and highest cost have the potential to be “show stopping” roadblocks. These will be selected using the scoring above and the focus issues for the HI roadmap. The results of this selection process will be communicated to the relevant collaboration partners for their comments.

4. Develop a roadmap of potential solutions for “show stoppers”: The roadmap developed for the “show stopping” roadblocks shall include analysis of the blocking issue and identification of a number of potential solutions. The collaboration shall include detailed work with other units of the ITRS and with other roadmapping activities such as the Jisso Roadmap, the iNEMI Roadmap and the Communications Technology Roadmap from MIT. We are continuing to work with the global technical community: industry, research institutes and academia, including the IEEE CPMT Society.

The blocking issues will be specifically investigated by the leading experts within the ITRS structure, academia, industry, government and research organizations to ensure a broad-based understanding. Potential solutions will be identified through a similar collaboration process and evaluated through a series of focused workshops similar to the process used by the ERD iTWG. In such a workshop, one proponent and one critic present to the group, followed by a discussion and a voting process that may take several iterations to reach a consensus.

The cross Focus Team/TWG collaboration will use an iterative procedure to converge on an understanding of the challenges and potential solutions that is self-consistent across the ITRS structure. An example is illustrated in FIGURE 2.

FIGURE 2. Iterative collaboration process.

It is critically important that our time horizon include the full 15 years of the ITRS. The work to anticipate the true roadblocks for heterogeneous integration, define potential solutions and implement a successful solution may require the full 15 years. Among the tables, we will include five-year checkpoints of the major challenges for the key issues of cost, power, latency and bandwidth. For this table to be useful, we will face the challenge of identifying the specific metric or metrics to be used for each application driver as we prepare the Heterogeneous Integration roadmap chapter for 2015 and beyond.

BILL CHEN is a senior technical advisor at ASE US, Sunnyvale, CA; BILL BOTTOMS is president and CEO of 3MT Solutions, Santa Clara, CA; DAVE ARMSTRONG is director of business development at Advantest, Fort Collins, CO; and ATSUNOBU ISOBAYASHI works in Toshiba’s Center for Semiconductor Research & Development, Kanagawa, Japan.