Category Archives: MEMS

Researchers at the University of California, Riverside Bourns College of Engineering and the Russian Academy of Sciences have successfully demonstrated pattern recognition using a magnonic holographic memory device, a development that could greatly improve speech and image recognition hardware.

Pattern recognition focuses on finding patterns and regularities in data. The uniqueness of the demonstrated work is that the input patterns are encoded into the phases of the input spin waves.

Clockwise are: photo of the prototype device; schematic of the eight-terminal magnonic holographic memory prototype; and a collection of experimental data obtained for two magnonic matrixes. Credit: UC Riverside

Spin waves are collective oscillations of spins in magnetic materials. Spin wave devices are advantageous over their optical counterparts because they are more scalable due to a shorter wavelength. Also, spin wave devices are compatible with conventional electronic devices and can be integrated within a chip.

The researchers built a prototype eight-terminal device consisting of a magnetic matrix with micro-antennas to excite and detect the spin waves. Experimental data they collected for several magnonic matrixes show that unique output signatures correspond to specific phase patterns. The micro-antennas allow the researchers to generate and recognize any input phase pattern, a big advantage over existing practices.

The spin waves then propagate through the magnetic matrix and interfere. Some input phase patterns produce a high output voltage, while other combinations result in a low output voltage, where “high” and “low” are defined relative to a reference voltage (i.e., the output is high if the output voltage exceeds 1 millivolt, and low if it is less than 1 millivolt).
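The phase-encoding and thresholding scheme described above can be sketched as a simple phasor sum. This is a toy illustration only, with made-up amplitudes and port counts; it is not the authors' actual device physics:

```python
import numpy as np

def magnonic_output(input_phases_rad, reference_mv=1.0, port_amplitude_mv=1.0):
    """Toy model of the readout described above (not the real device physics):
    each port launches a spin wave whose phase encodes one input value; the
    waves superpose, and the detected voltage is taken as the magnitude of
    the summed phasors, thresholded against a 1 mV reference."""
    superposed = sum(port_amplitude_mv * np.exp(1j * p) for p in input_phases_rad)
    voltage_mv = abs(superposed)
    return voltage_mv, ("high" if voltage_mv > reference_mv else "low")

# All four ports in phase: constructive interference gives a "high" output.
print(magnonic_output([0.0, 0.0, 0.0, 0.0])[1])      # high
# Alternating phases cancel: destructive interference gives a "low" output.
print(magnonic_output([0.0, np.pi, 0.0, np.pi])[1])  # low
```

Because the superposition happens in one propagation step, every input port contributes simultaneously, which is the source of the parallelism the researchers describe.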

It takes about 100 nanoseconds for recognition, which is the time required for spin waves to propagate and to create the interference pattern.

The most appealing property of this approach is that all of the input ports operate in parallel. It takes the same amount of time to recognize patterns (numbers) from 0 to 999, and from 0 to 10,000,000. Potentially, magnonic holographic devices can be fundamentally more efficient than conventional digital circuits.

The work builds upon findings published last year by the researchers, who showed that a 2-bit magnonic holographic memory device can recognize internal magnetic memory states via spin wave superposition. That work was recognized as a top 10 physics breakthrough by Physics World magazine.

“We were excited by that recognition, but the latest research takes this to a new level,” said Alex Khitun, a research professor at UC Riverside, who is the lead researcher on the project. “Now, the device works not only as a memory but also as a logic element.”

The latest findings were published in a paper called “Pattern recognition with magnonic holographic memory device” in the journal Applied Physics Letters. In addition to Khitun, authors are Frederick Gertz, a graduate student who works with Khitun at UC Riverside, and A. Kozhevnikov, Y. Filimonov and G. Dudko, all from the Russian Academy of Sciences.

Holography is a technique based on the wave nature of light that exploits the interference between an object beam and a coherent background (reference) beam. It is commonly associated with images made from light, such as those on driver’s licenses or paper currency. However, these are only a narrow application of holography.
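The recording principle can be illustrated with the standard interference-intensity identity (a generic optics sketch with arbitrary example values, not tied to any particular device): the detected intensity of the superposed beams contains a cross term that preserves the object beam's phase.

```python
import cmath

def recorded_intensity(obj, ref):
    """Hologram recording as plain interference: the detector sees
    |obj + ref|^2 = |obj|^2 + |ref|^2 + 2*Re(obj * conj(ref)).
    The cross term is what encodes the object beam's phase in the medium."""
    return abs(obj + ref) ** 2

# Example: object beam with a 60-degree phase offset against a unit reference.
obj = 0.5 * cmath.exp(1j * cmath.pi / 3)
ref = 1.0 + 0j
cross_term = 2 * (obj * ref.conjugate()).real
# recorded_intensity(obj, ref) equals abs(obj)**2 + abs(ref)**2 + cross_term
```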

Holography has also been recognized as a future data storage technology, offering unprecedented storage capacity and the ability to write and read large amounts of data in a highly parallel manner.

The main challenge associated with magnonic holographic memory is the scaling of the operational wavelength, which requires the development of sub-micrometer scale elements for spin wave generation and detection.

By combining 3D holographic lithography and 2D photolithography, researchers from the University of Illinois at Urbana-Champaign have demonstrated a high-performance 3D microbattery suitable for large-scale on-chip integration with microelectronic devices.

“This 3D microbattery has exceptional performance and scalability, and we think it will be of importance for many applications,” explained Paul Braun, a professor of materials science and engineering at Illinois. “Micro-scale devices typically utilize power supplied off-chip because of difficulties in miniaturizing energy storage technologies. A miniaturized high-energy and high-power on-chip battery would be highly desirable for applications including autonomous microscale actuators, distributed wireless sensors and transmitters, monitors, and portable and implantable medical devices.”

CREDIT: University of Illinois

“Due to the complexity of 3D electrodes, it is generally difficult to realize such batteries, let alone the possibility of on-chip integration and scaling. In this project, we developed an effective method to make high-performance 3D lithium-ion microbatteries using processes that are highly compatible with the fabrication of microelectronics,” stated Hailong Ning, a graduate student in the Department of Materials Science and Engineering and first author of the article, “Holographic Patterning of High Performance on-chip 3D Lithium-ion Microbatteries,” appearing in Proceedings of the National Academy of Sciences.

“We utilized 3D holographic lithography to define the interior structure of electrodes and 2D photolithography to create the desired electrode shape,” Ning added. “This work merges important concepts in fabrication, characterization, and modeling, showing that the energy and power of the microbattery are strongly related to the structural parameters of the electrodes such as size, shape, surface area, porosity, and tortuosity. A significant strength of this new method is that these parameters can be easily controlled during lithography steps, which offers unique flexibility for designing next-generation on-chip energy storage devices.”

Enabled by a 3D holographic patterning technique, in which multiple optical beams interfere inside the photoresist to create the desired 3D structure, the battery possesses well-defined, periodically structured porous electrodes that facilitate the fast transport of electrons and ions inside the battery, offering supercapacitor-like power.

“Although accurate control on the interfering optical beams is required to construct 3D holographic lithography, recent advances have significantly simplified the required optics, enabling creation of structures via a single incident beam and standard photoresist processing. This makes it highly scalable and compatible with microfabrication,” stated John Rogers, a professor of materials science and engineering, who has worked with Braun and his team to develop the technology.

“Micro-engineered battery architectures, combined with high energy material such as tin, offer exciting new battery features including high energy capacity and good cycle lives, which provide the ability to power practical devices,” stated William King, a professor of mechanical science and engineering, who is a co-author of this work.

To date, chip-based retinal implants have only permitted a rudimentary restoration of vision. However, modifying the electrical signals emitted by the implants could change that. This is the conclusion of the initial published findings of a project sponsored by the Austrian Science Fund FWF, which showed that two specific retinal cell types respond differently to certain electrical signals – an effect that could improve the perception of light-dark contrasts.

“Making the blind really see – that will take some time,” says Frank Rattay of the Institute of Analysis and Scientific Computing at the Vienna University of Technology – TU Wien. “But in the case of certain diseases of the eyes, it is already possible to restore vision, albeit still highly impaired, by means of retinal implants.”

Pulse emitter

To achieve this, microchips implanted in the eye convert light signals into electrical pulses, which then stimulate the cells of the retina. One major problem with this approach is that the various types of cells that respond differently to light stimuli in a healthy eye are all stimulated to the same degree. This greatly reduces the perception of contrast.

“But it might be possible,” Rattay says, “to stimulate one cell type more than the other by means of special electrical pulses, thus enhancing the perception of contrast.”

Within the framework of an FWF project, he and his team have discovered some promising approaches. Together with colleagues Shelley Fried of Harvard Medical School and Eberhard Zrenner of University Hospital Tübingen, he is now corroborating the simulated results with experimental findings.

Simulated & stimulated

With the help of a sophisticated computer simulation of two retinal cell types, Rattay and his team have discovered something very exciting. They found that by selecting specific electrical pulses, different biophysical processes can actually be activated in the two cell types. For example, monophasic stimulation, where the electrical polarity of the signal from the retinal implant does not change, leads to stronger depolarisation in one cell type than in the other.

“Depolarization means that the negative charge that prevails in cells switches briefly to a positive charge. This is the mechanism by which signals are propagated along nerves,” Rattay explains. This charge reversal was significantly weaker in the other cell type. In their simulation, the team also found as much as a fourfold difference in the response of calcium concentrations in the two cell types to a monophasic signal.

On and off

“Calcium is an important signal molecule in many cells and plays a key role in information processing. For this reason, we specifically considered calcium concentrations in our simulation by considering the activity of membrane proteins involved in calcium transport,” explains Paul Werginz, a colleague of Rattay and lead author of the recently published paper.

Concretely, the team devised models of two retinal cell types that are designated as ON and OFF cells. ON cells react more strongly when the light is brighter at the centre of their location, while OFF cells react more strongly when the light is more intense at the edges. The two cell types are arranged in the retina in such a way as to greatly enhance contrast. The problem is that instead of light pulses, conventional retinal implants emit electrical pulses that elicit the same biochemical reactions in both cell types. Consequently, contrast perception is greatly reduced. However, Rattay’s work shows that this needn’t be the case.
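A minimal center-surround sketch (a toy model with invented numbers, not the biophysical simulation used by the researchers) shows why identical stimulation of both cell types erases contrast:

```python
def on_response(center, surround):
    """ON cell (toy model): responds when the center of its receptive
    field is brighter than the surrounding edge."""
    return max(center - surround, 0.0)

def off_response(center, surround):
    """OFF cell (toy model): responds when the edge is brighter than
    the center."""
    return max(surround - center, 0.0)

# A bright spot on a dark background drives ON cells but not OFF cells,
# so the two populations together signal the contrast edge.
print(on_response(1.0, 0.25), off_response(1.0, 0.25))   # 0.75 0.0
# A conventional implant pulse excites both populations identically,
# which is equivalent to center == surround: the differential signal,
# and with it the contrast information, disappears.
print(on_response(0.6, 0.6), off_response(0.6, 0.6))     # 0.0 0.0
```

Pulses that preferentially depolarize one cell type would restore a nonzero differential signal, which is the effect Rattay's simulations suggest is achievable.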

Shape as a factor

Rattay’s research group also found that the shapes of the individual ON and OFF cells affect the way in which the signals are processed. For example, the different lengths of the two cell types are an important factor. This too, Rattay believes, could be an important finding that might help to significantly improve the performance of future retinal implants by modulating the electrical signals they emit. Rattay and his team are pursuing this goal in order to develop strategies that will allow many blind people to recognise objects visually.

Frank Rattay is a professor at the Institute of Analysis and Scientific Computing of the Vienna University of Technology, where he heads the Computational Neuroscience and Biomedical Engineering group. For decades he has been publishing internationally recognised work on the generation and optimisation of artificial nerve signals.

SEMI has announced that executives from MEMS giants Bosch and STMicroelectronics, the largest fabless MEMS company Invensense, and leading IC foundry TSMC will deliver the keynote talks at the European MEMS Summit (Sept 17-18, 2015 – Milan, Italy).

For the first installment of SEMI’s European MEMS Summit, themed “Sensing the Planet, MEMS for Life,” Stefan Finkbeiner, GM and CEO of Bosch Sensortec, Benedetto Vigna, Executive VP and General Manager of the Analog, MEMS & Sensors group of STMicroelectronics, Behrooz Abdi, CEO and President of Invensense, and Maria Marced, President of TSMC Europe will join SEMI to share their vision of the current challenges facing the MEMS industry and their recipes for success. With these headliners, SEMI’s European MEMS Summit promises to be a powerhouse of MEMS experts, both from a technological standpoint and from a business standpoint.

“We are very excited to offer attendees a high-profile collection of international speakers for this first edition of our European MEMS Summit,” commented Yann Guillou, business development manager at SEMI. “Elaborated with the support of industry representatives, we have made an effort to address the most crucial industry issues with the belief that this conference program will be a positive contribution to the MEMS industry and will help MEMS actors collectively shape their industry’s progress. Above all, our hope is that attendees will leave the Summit with a better understanding of the crucial technological and business challenges faced by the MEMS value chain as well as an idea of the solutions that are being proposed today to address those problems.”

The Summit’s conference will bring together a diversity of high caliber MEMS technology experts, including representatives of ARM, ASE Group, CEA-Leti, Freescale, IHS, Infineon, SITRI, Tronics Microsystems, X-FAB, Yole Développement and more. The event will emphasize the importance of understanding marketplace dynamics for a global comprehension of the evolution of MEMS. Speakers will provide their outlooks on the MEMS market, their expectations for future marketplace trends and their assessment of changes in business models, the supply chain, and the ecosystem. One full day of the event will be dedicated to “Applications,” giving attendees a more global vision of how MEMS are being applied in the automotive, consumer electronics, wearable and industrial sectors, as well as the importance of MEMS in the growth of the Internet of Things. Despite a strong focus on business-related aspects, technology will not be forgotten; speakers will address topics such as new detection principles, innovation in materials, new packaging solutions, MEMS on 300mm wafers and more.

The Summit will be held at the grand Palazzo Lombardia in Milan, Italy. At the heart of the Palazzo, and in complement to the conference, SEMI will organize a MEMS Exhibition, giving companies with MEMS activities a chance to reach out to other participants from the same sector. The European MEMS Summit will include numerous networking opportunities – a gala dinner, a networking cocktail hour and numerous coffee and lunch breaks.

For any questions regarding the event, contact Yann Guillou from SEMI ([email protected]).

BY GREG SHUTTLEWORTH, Global Product Manager at LINDE ELECTRONICS

The market expectations of modern electronics technology are changing the landscape in terms of performance and, in particular, power consumption, and new innovations are putting unprecedented demands on semiconductor devices. Internet of Things devices, for example, largely depend on a range of different sensors, and will require new architectures to handle the unprecedented levels of data and operations running through their small form factors.

The continued shrinkage of semiconductor dimensions and the matching decreases in microchip size have corresponded to the principles of Moore’s Law with an uncanny reliability since the idea’s coining in 1965. However, the curtain is now closing on the era of predictable / conventional size reduction due to physical and material limitations.

Thus, in order to continue to deliver increased performance at lower cost and with a smaller footprint, different approaches are being explored. Companies can already combine multiple functions on a single chip – memory and logic devices, for example, or the multiple sensor types of an Internet of Things device.

We have always known that we’d reach a point where conventional shrinking of semiconductor dimensions would begin to lose its effect, but now we are starting to tackle it head on. A leading U.S. semiconductor manufacturer got the ball rolling with its FinFET (or tri–gate) design in 2012, whose 3D transistors allow designs that minimize current leakage; other companies look set to bring their own 3D chips to market.

At the same time, there’s a great deal of experimentation with a range of other approaches to semiconductor redesign. Memory device manufacturers, for instance, are looking to stack memory cells vertically on top of each other in order to make the most of a microchip’s limited space. Others, meanwhile, are examining the materials in the hope of using new, more efficient silicon–like materials in their chips.

Regardless of the approach taken, however, this step change in microchip creation means new material demands from chip makers and new manufacturing techniques to go with them.

The semiconductor industry has traditionally had to add new materials and process techniques to enhance the performance of the basic silicon building blocks – tungsten plugs, copper wiring / CMP and high–k metal gates, for example. Now, however, it is becoming impossible to extend conventional materials to meet the performance requirements. Germanium is already added to Si to introduce strain, but its high carrier mobility means germanium is also likely to become the material of the fin itself, complemented by a corresponding fin made of III–V material, in effect integrating three semiconductor materials into a single device.

Further innovation is required in the areas of lithography and etch. This is due to the delay in production suitability of the EUV lithography system proposed to print the very fine structures required for future technology nodes. Complex multi-patterning schemes using conventional lithography are already underway to compensate for this technology delay, requiring the use of carbon hard masks and the introduction of gases such as acetylene, propylene and carbonyl sulphide to the semiconductor fab. Printing the features is only half of the challenge; the structures also need to be etched. The introduction of new materials always presents some etch challenges as all materials etch at slightly different rates and the move to 3D structures, where very deep and narrow features need to be defined through a stack of different materials, will be a particularly difficult challenge to meet.

The microchip industry has continuously evolved to deliver amazing technological advances, but we are now seeing the start of a revolution in microchip design and manufacturing. The revolution will be slow but steady. Such is the pattern of the microchip industry, but it will need a succession of new materials at the ready, and, at Linde, we’re prepared to make sure the innovators have everything they need.

Interconnecting transistors and other components in the IC, in the package, on the printed circuit board and at the system and global network level is where the future limitations in performance, power, latency and cost reside.

BY BILL CHEN, ASE US, Sunnyvale, CA; BILL BOTTOMS, 3MT Solutions, Santa Clara, CA; DAVE ARMSTRONG, Advantest, Fort Collins, CO; and ATSUNOBU ISOBAYASHI, Toshiba, Kanagawa, Japan.

Heterogeneous Integration refers to the integration of separately manufactured components into a higher level assembly that in the aggregate provides enhanced functionality and improved operating characteristics.

In this definition, components should be taken to mean any unit – whether an individual die, MEMS device, passive component, assembled package or sub‐system – that is integrated into a single package. Operating characteristics should also be taken in the broadest meaning, including characteristics such as system-level cost of ownership.

The mission of the ITRS Heterogeneous Integration Focus Team is to provide guidance to industry, academia and government to identify key technical challenges with sufficient lead time that they do not become roadblocks preventing the continued progress in electronics that is essential to the future growth of the industry and the realization of the promise of continued positive impact on mankind. The approach is to identify the requirements for heterogeneous integration in the electronics industry through 2030, determine the difficult challenges that must be overcome to meet these requirements and, where possible, identify potential solutions.

Background

The environment is rapidly changing and will require revolutionary changes after 50 years in which change was largely evolutionary. The major factors driving the need for change are:

  • We are approaching the end of Moore’s Law scaling.
  • The emergence of 2.5D and 3D integration techniques.
  • The emerging world of Internet of Everything will cause explosive growth in the need for connectivity.
  • Mobile devices such as smartphones and tablets are growing rapidly in number and in data communications requirements, driving explosive growth in capacity of the global communications network.
  • Migration of data, logic and applications to the cloud drives demand for reduction in latency while accommodating this network capacity growth.

Satisfying these emerging demands cannot be accomplished with the current electronics technology and these demands are driving a new and different integration approach. The requirements for power, latency, bandwidth/bandwidth density and cost can only be accomplished by a revolutionary change in the global communications network, with all the components in that network and everything attached to it. Ensuring the reliability of this “future network” in an environment where transistors wear out, will also require innovation in how we design and test the network and its components.

The transistors in today’s network account for less than 10 percent of total power, total latency and total cost. It is the interconnection of these transistors and other components in the IC, in the package, on the printed circuit board and at the system and global network level where the future limitations in performance, power, latency and cost reside. Overcoming these limitations will require heterogeneous integration of different materials, different devices (logic, memory, sensors, RF, analog, etc.) and different technologies (electronics, photonics, plasmonics, MEMS and sensors). New materials, manufacturing equipment and processes will be required to accomplish this integration and overcome these limitations.

Difficult challenges

The top‐level difficult challenges will be the reduction of power per function, cost per function and latency while continuing the improvements in performance, physical density and reliability. Historically, scaling of transistors has been the primary contributor to meeting required system level improvements. Heterogeneous integration must provide solutions to the non‐transistor infrastructure that replace the shortfall from the historical pace of progress we have enjoyed from scaling CMOS. Packaging and test have found it difficult to scale their performance or cost per function to keep pace with transistors and many difficult challenges must be met to maintain the historical pace of progress.

In order to identify the difficult challenges we have selected seven application areas that will drive critical future requirements to focus our work. These areas are:

  • Mobile products
  • Big data systems and interconnect
  • The cloud
  • Biomedical products
  • Green technology
  • Internet of Things
  • Automotive components and systems

An initial list of difficult challenges for heterogeneous integration in these application areas is presented in three categories: (1) on‐chip interconnect, (2) assembly and packaging and (3) test. These are analyzed in line with the roadmapping process and will be used to define the top 10 challenges that have the potential to be “show stoppers” for the seven application areas identified above.

On-chip interconnect difficult challenges

The continued decrease in feature size, increase in transistor count and expansion into 3D structures are presenting many difficult challenges. While challenges in continuous scaling are discussed in the “More Moore” section, the difficult challenges of interconnect technology in devices with 3D structures are listed here. Note that this assumes a 3D structure with TSV, optical interconnects and passive devices in interposer substrates.

ESD (Electrostatic Discharge): Plasma damage to transistors from TSV etching, especially in the via-last scheme. A low-damage TSV etch process and the layout of protection diodes are the key factors.

CPI (Chip Package Interaction) Reliability [Process]: The low fracture toughness of ULK (ultra low‐k) dielectrics causes failures such as delamination. Material development of ULK with higher modulus and hardness is the key factor.

CPI (Chip Package Interaction) Reliability [Design]: Layout optimization is key for devices using the Cu/ULK structure.

Stress management in TSV [Via Last]: Yield and reliability in the Mx layers where the TSV lands are a concern.

Stress management in TSV [Via Middle]: Stress deformation by copper extrusion in TSV and a KOZ (Keep Out Zone) in transistor layout are the issues.

Thermal management [Hot Spot]: Heat dissipation in TSVs is an issue. Effective homogenization of hot-spot heat, either by material choice or by layout optimization, is the key factor.

Thermal management [Warpage]: Thermal expansion management of each interconnect layer is necessary in thinner Si substrate with TSV.

Passive Device Integration [Performance]: Higher Q – in other words, thicker metal lines and lower-loss (low tan δ) dielectrics – is key for achieving lower-power and lower-noise circuits.

Passive Device Integration [Cost]: Higher film and higher are required for higher density and lower footprint layout.

Implementation of Optical Interconnects: Optical interconnects for signaling, clock distribution and I/O require the development of a number of optical components such as light sources, photodetectors, modulators, filters and waveguides. On‐chip optical interconnects replacing global interconnects require a breakthrough to overcome the cost issue.

Assembly and packaging difficult challenges

Today, assembly and packaging are often the limiting factors in performance, size, latency, power and cost. Although much progress has been made with the introduction of new packaging architectures and processes – with innovations in wafer-level packaging and system-in-package, for example – a significantly higher rate of progress is required. The complexity of the challenge is increasing due to the unique demands of heterogeneous integration, including the integration of diverse materials and diverse circuit fabric types into a single SiP architecture and the use of the third dimension.

Difficult packaging challenges by circuit fabric

  • Logic: Unpredictable hot spot locations, high thermal density, high frequency, unpredictable work load, limited by data bandwidth and data bottle‐necks. High bandwidth data access will require new solutions to physical density of bandwidth.
  • Memory: Thermal density depends on memory type, and thermal density differences drive changes in package architecture and materials, thinned-device fault models, and test and redundancy repair techniques. Packaging must support low-latency, high-bandwidth, large (>1Tb) memory in a hierarchical architecture in a single package and/or SiP.
  • MEMS: There is a virtually unlimited set of requirements. Issues to be addressed include hermetic vs. non‐hermetic, variable functional density, plumbing, stress control, and cost effective test solutions.
  • Photonics: Extreme sensitivity to thermal changes, O to E and E to O, optical signal connections, new materials, new assembly techniques, new alignment and test techniques.
  • Plasmonics: Requirements are yet to be determined, but they will be different from other circuit types. Issues to be addressed include acousto‐magneto effects and nonlinear plasmonics.
  • Microfluidics: Sealing, thermal management and flow control must be incorporated into the package.

Most if not all of these will require new materials and new equipment for assembly and test to meet the 15 year Roadmap requirements.

Difficult packaging challenges by material

Semiconductors: Today the vast majority of semiconductor components are silicon based. In the future, both organic and compound semiconductors will be used, each with unique mechanical, thermal and electrical properties and requirements.

Conductors: Cu has replaced Au and Al in many applications but this is not good enough for future needs. Metal matrix composites and ballistic conductors will be required. Inserting some of these new materials will require new assembly, contacting and joining techniques.

Dielectrics: New high k dielectrics and low k dielectrics will be required. Fracture toughness and interfacial adhesion will be the key parameters. Packaging must provide protection for these fragile materials.

Molding compound: Improved thermal conductivity, thinner layers and lower CTE are key requirements.

Adhesives: The die attach materials, flexible conductors and residue-free materials that are needed do not exist today.

Biocompatible materials: For applications in the healthcare and medical domain (e.g. body patches, implants, smart catheters, electroceuticals), semiconductor‐based devices have to be biocompatible. This involves the integration of new (flexible) materials to comply with specific packaging (form factor) requirements.

Difficult challenges for the testing of heterogeneous devices

The difficulties in testing heterogeneous devices can be broadly separated into three categories: Test Quality Assurance, Test Infrastructure, and Test Design Collaboration.

Test quality assurance needs to comprehend and set achievable quality and reliability metrics for each individual component prior to integration, in order to meet the heterogeneous system quality and reliability targets. Assembly and test flows will become intertwined and interdependent. They need to be constructed in a manner that maintains a cost-effective balance of yield loss versus component cost, and proper component fault isolation and quantification. The industry will be required to integrate components that cannot guarantee KGD (known good die) without insurmountable cost penalties, and this will require integrator-visible and accessible repair mechanisms.

Test infrastructure hardware needs to comprehend multiple configurations of the same device to enable test point insertion at partially assembled and fully assembled states. This includes but is not limited to different component heights, asymmetric component locations, and exposed metal contacts (including ESD challenges). Test infrastructure software needs to enable storing and using volume test data for multiple components that may or may not have been generated within the final integrator’s data domains but are critical for the final heterogeneous system functionality and quality. It also needs to enable methods for highly granular component tracking for subsequent joint supplier and integrator failure analysis and debug.

Test design collaboration is one of the biggest challenges that the industry will need to overcome. Heterogeneous, highly integrated, highly functional systems will require test features co‐designed across component boundaries, with more test coverage and debug capability than simple boundary scans. The challenge of breaking up what was once the responsibility of a wholly contained design-for-test team across multiple independent entities, each trying to protect IP, is only magnified by the additional requirement that the jointly developed test solutions will need to be standardized across multiple competing heterogeneous integrators. Industry-wide collaboration on, and adherence to, test standards will be required in order to maintain cost- and time-effective design cycles for highly desired components – something that has traditionally been required only for cross-component-boundary communication protocols.

The roadmapping process

The objective of ITRS 2.0 for heterogeneous integration is to focus on a limited number of key challenges (10) that have the greatest potential to be "show stoppers," while leaving other challenges identified and listed but without focus on detailed technical challenges and potential solutions. In this process, collaboration with other Focus Teams and Technical Working Groups will be a critical resource. While we will need collaboration with groups both inside and outside the ITRS, some of these collaborations are critical for HI to address its mission. FIGURE 1 shows the major internal collaborations in three categories.

FIGURE 1. Collaboration priorities.


We expect to review these key challenges and our list of other challenges on a yearly basis, making changes so that our focus keeps pace as the key challenges evolve. This will ensure that our efforts remain focused on the pre-competitive technologies with the greatest future value to our audience. There are four phases in the process, detailed below.

1. Identify challenges for application areas: The process involves collaboration with other Focus Teams, technical TWGs, and other roadmapping groups, casting a wide net to identify all gaps and challenges associated with the seven selected application areas (as modified from time to time). This list of challenges will be large (perhaps hundreds), and the HI team will score them by difficulty and criticality.

2. Define potential solutions: Using the scoring from phase 1, a subset (30-40) of challenges will be selected for identification of potential solutions. The remainder will be archived for the next cycle of this process. This work will be coordinated through the same collaboration process defined above. These potential solutions will be scored by probability of success and cost.

3. Down-select to the 10 most critical challenges: The potential solutions with the lowest probability of success and the highest cost have the potential to be "show stopping" roadblocks. These will be selected using the scoring above and the focus issues for the HI roadmap. The results of this selection process will be communicated to the relevant collaboration partners for their comments.

4. Develop a roadmap of potential solutions for "show stoppers": The roadmap developed for the "show stopping" roadblocks shall include analysis of the blocking issue and identification of a number of potential solutions. The collaboration shall include detailed work with other units of the ITRS and with other roadmapping activities such as the Jisso Roadmap, the iNEMI Roadmap, and the Communications Technology Roadmap from MIT. We are continuing to work with the global technical community: industry, research institutes, and academia, including the IEEE CPMT Society.
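As a rough illustration, the scoring and down-select of phases 1 through 3 might look like the following sketch (the field names, weights, and thresholds are assumptions for illustration, not part of the ITRS process definition):

```python
# Illustrative sketch of the phase 1-3 scoring and down-select.
# Field names and selection sizes are assumptions for illustration.

def down_select(challenges, shortlist_size=35, final_size=10):
    """Score challenges, shortlist 30-40, then keep the 10 most critical.

    Phase 1: rank all identified challenges by difficulty x criticality.
    Phase 3: among the shortlist, the likeliest "show stoppers" are the
    potential solutions with the lowest probability of success and the
    highest cost.
    """
    # Phase 1: score every identified challenge.
    ranked = sorted(challenges,
                    key=lambda c: c["difficulty"] * c["criticality"],
                    reverse=True)
    shortlist = ranked[:shortlist_size]  # phase 2 input (30-40 items)

    # Phase 3: low success probability and high cost => potential show stopper.
    risk = sorted(shortlist,
                  key=lambda c: (1 - c["p_success"]) * c["cost"],
                  reverse=True)
    return risk[:final_size]
```

The key property the sketch captures is that the two rankings use different criteria: the first pass keeps what matters most, while the final cut keeps what is least likely to solve itself.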

The blocking issues will be investigated by leading experts within the ITRS structure, academia, industry, government, and research organizations to ensure a broad-based understanding. Potential solutions will be identified through a similar collaboration process and evaluated through a series of focused workshops, similar to the process used by the ERD iTWG. In such a workshop, one proponent and one critic present to the group, followed by a discussion and a voting process that may require several iterations to reach consensus.

The cross Focus Team/TWG collaboration will use an iterative procedure to converge on an understanding of the challenges and potential solutions that is self-consistent across the ITRS structure. An example is illustrated in FIGURE 2.

FIGURE 2. Iterative collaboration process.


It is critically important that our time horizon include the full 15 years of the ITRS. The work to anticipate the true roadblocks for heterogeneous integration, define potential solutions, and implement a successful solution may require the full 15 years. Among the tables we will include 5-year checkpoints of the major challenges for the key issues of cost, power, latency, and bandwidth. For this table to be useful, we will face the challenge of identifying the specific metric or metrics to be used for each application driver as we prepare the Heterogeneous Integration roadmap chapter for 2015 and beyond.

BILL CHEN is a senior technical advisor for ASE US, Sunnyvale, CA; BILL BOTTOMS is President and CEO of 3MT Solutions, Santa Clara, CA; DAVE ARMSTRONG is director of business development at Advantest, Fort Collins, CO; and ATSUNOBU ISOBAYASHI works in Toshiba's Center for Semiconductor Research & Development, Kanagawa, Japan.

A new study coauthored by Wellesley economist Professor Daniel E. Sichel reveals that innovation in an important technology sector is happening faster than experts had previously thought, creating a backdrop for better economic times ahead.

The Producer Price Index (PPI) of the United States suggests that the prices of semiconductors have barely fallen in recent years. The slow decline in semiconductor prices stands in sharp contrast to the rapidly falling prices reported from the mid-1980s to the early 2000s, and has been interpreted as a signal of sluggish innovation in this key sector.

The apparent slowdown puzzled Sichel and his coauthors, David M. Byrne of the Federal Reserve Board and Stephen D. Oliner of the American Enterprise Institute and UCLA, particularly in light of evidence that the performance of microprocessor units (MPUs), which account for about half of U.S. semiconductor shipments, has continued to improve at a rapid pace. After closely examining historical pricing data, the economists found that Intel, the leading producer of MPUs, dramatically changed the way it priced these chips in the mid-2000s, roughly the same time the slowdown in the government data begins. Prior to this period, Intel typically lowered the list prices of older chips to remain competitive with newly introduced chips. After 2006, however, Intel began to keep chip prices relatively unchanged over their life cycle, which affected the official statistics.

To obtain a more accurate assessment of the pace of innovation in this important sector, Sichel, Byrne, and Oliner developed an alternative method of measurement that evaluates changes in actual MPU performance to gauge the rate of improvement in price-performance ratios. The economists' preferred index shows that quality-adjusted MPU prices continued to fall rapidly after the mid-2000s, contrary to what the PPI indicates, meaning that worries about a slowdown in this sector are likely unwarranted.
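The intuition behind such a quality-adjusted index can be shown with hypothetical numbers: even when list prices stay flat, the price of a unit of performance keeps falling as long as performance improves.

```python
# Hypothetical illustration of a quality-adjusted price measure:
# track the price per unit of performance rather than the list price.
# The prices and performance scores below are invented for illustration.

def price_per_performance(list_price, perf_score):
    return list_price / perf_score

# Two hypothetical chip generations: list prices nearly flat (as in the
# post-2006 pricing regime described above), performance still improving.
gen_old = {"price": 300.0, "perf": 100.0}
gen_new = {"price": 305.0, "perf": 150.0}

list_price_change = gen_new["price"] / gen_old["price"] - 1
quality_adj_change = (price_per_performance(gen_new["price"], gen_new["perf"])
                      / price_per_performance(gen_old["price"], gen_old["perf"]) - 1)

print(f"list price change:             {list_price_change:+.1%}")   # +1.7%
print(f"quality-adjusted price change: {quality_adj_change:+.1%}")  # -32.2%
```

An index built from list prices alone would register the first number; an index built on performance delivered per dollar registers the second. This gap is the measurement issue the authors identify in the PPI.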

According to Sichel, these results have important implications, not only for understanding the rate of technological progress in the semiconductor industry but also for the broader debate about the pace of innovation in the U.S. economy.

“These findings give us reason to be optimistic,” said Sichel. “If technical change in this part of the economy is still rapid, it provides hope for better times ahead.”

Sichel and his coauthors also acknowledge that their results raise a new puzzle. “In recent years,” they write, “the price index for computing equipment has fallen quite slowly by historical standards. If MPU prices have, in fact, continued to decline rapidly, why have prices for computers–which rely on MPUs for their performance–not followed suit?” The researchers believe it is possible that the official price indexes for computers may also suffer from measurement issues, and they are investigating this possibility in further work.

“How Fast Are Semiconductor Prices Falling?,” coauthored by Daniel E. Sichel, Wellesley College and NBER; David M. Byrne, Federal Reserve Board; and Stephen D. Oliner, American Enterprise Institute and UCLA, is available as an NBER working paper and is online at http://www.nber.org/papers/w21074 and https://www.aei.org/publication/how-fast-are-semiconductor-prices-falling/.

The Semiconductor Industry Association (SIA) today announced worldwide sales of semiconductors reached $83.1 billion during the first quarter of 2015, an increase of 6.0 percent compared to the first quarter of 2014. Global sales for the month of March 2015 were $27.7 billion, 6.0 percent higher than the March 2014 total of $26.1 billion and 0.1 percent lower than last month’s total. All monthly sales numbers are compiled by the World Semiconductor Trade Statistics (WSTS) organization and represent a three-month moving average.

“Despite macroeconomic challenges, first quarter global semiconductor sales are higher than they were last year, which was a record year for semiconductor revenue,” said John Neuffer, president and CEO, Semiconductor Industry Association. “The Americas region posted its sixth straight month of double-digit, year-to-year growth to lead all regional markets, and DRAM and analog products continue to be key drivers of global sales growth.”

Regionally, sales were up compared to last month in Asia Pacific/All Other (3.1 percent), Europe (2.7 percent), and China (1.0 percent), which is broken out as a separate country in the sales data for the first time. Japan (-0.4 percent) and the Americas (-6.9 percent) both saw sales decrease compared to last month. Compared to March 2014, sales increased in the Americas (14.2 percent), China (13.3 percent), and Asia Pacific/All Other (3.8 percent), but decreased in Europe (-4.0 percent) and Japan (-9.6 percent).

“Congress is considering a legislative initiative called Trade Promotion Authority (TPA) that would help promote continued growth in the semiconductor sector and throughout the U.S. economy,” Neuffer continued. “Free trade is vital to the U.S. semiconductor industry. In 2014, U.S. semiconductor company sales totaled $173 billion, representing over half the global market, and 82 percent of those sales were to customers outside the United States. TPA paves the way for free trade, and Congress should swiftly enact it.”

March 2015
(Billions of U.S. dollars)

Month-to-Month Sales
Market                   Last Month   Current Month   % Change
Americas                       6.23            5.80      -6.9%
Europe                         2.88            2.95       2.7%
Japan                          2.55            2.54      -0.4%
China                          7.75            7.83       1.0%
Asia Pacific/All Other         8.33            8.59       3.1%
Total                         27.74           27.71      -0.1%

Year-to-Year Sales
Market                   Last Year    Current Month   % Change
Americas                       5.08            5.80      14.2%
Europe                         3.08            2.95      -4.0%
Japan                          2.81            2.54      -9.6%
China                          6.91            7.83      13.3%
Asia Pacific/All Other         8.27            8.59       3.8%
Total                         26.15           27.71       6.0%

Three-Month Moving Average Sales
Market                   Oct/Nov/Dec  Jan/Feb/Mar     % Change
Americas                       6.73            5.80     -13.8%
Europe                         3.01            2.95      -1.7%
Japan                          2.80            2.54      -9.1%
China                          8.03            7.83      -2.5%
Asia Pacific/All Other         8.57            8.59       0.2%
Total                         29.13           27.71      -4.9%
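The % Change columns follow directly from the reported figures; for the worldwide totals:

```python
# The % Change columns in the SIA table are simple percentage
# differences between the reported figures (in $ billions).

def pct_change(old, new):
    return round(100 * (new - old) / old, 1)

print(pct_change(27.74, 27.71))  # month-to-month:        -0.1
print(pct_change(26.15, 27.71))  # year-to-year:           6.0
print(pct_change(29.13, 27.71))  # vs. Oct/Nov/Dec total: -4.9
```

Note that because each figure is a three-month moving average, the "month-to-month" change compares overlapping three-month windows, which smooths out single-month volatility.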


Stiff competition in sensors for high-volume design wins and a recovery in actuator growth shuffled the ranking of suppliers in the $9.2 billion market for sensors and actuators in 2014, according to IC Insights’ new 2015 O-S-D Report—A Market Analysis and Forecast for Optoelectronics, Sensors/Actuators, and Discretes. The new O-S-D Report says the overall trend in sensors and actuators is for the largest suppliers to keep getting bigger, gaining market share because high-volume applications, such as smartphones and the huge potential of the Internet of Things (IoT), as well as automotive systems, require well-established track records for quality, long-term reliability, and on-time delivery of semiconductors.

Sensor leader Robert Bosch in Germany extended its lead in this market with a 16 percent sales increase in 2014 to nearly $1.2 billion. The German company became the first sensor maker to reach $1.0 billion in 2013 when its sales climbed 29 percent, reflecting continued strong growth in its automotive base and expansion into high-volume consumer and mobile applications. Bosch’s market share in sensor-only sales grew to 20 percent in 2014 from 18 percent in 2013 and 15 percent in 2012, according to the 10th edition of IC Insights’ annual O-S-D Report.

Meanwhile, STMicroelectronics saw its sensor/actuator sales fall 19 percent in 2014 to $630 million, dropping it from second place in 2013 to fourth among the market’s top suppliers. ST’s slide was partly caused by market share gains by Bosch and U.S.-based InvenSense, which climbed from 14th in 2013 to ninth in the 2014 sensor/actuator ranking with a 33 percent increase in sensor sales to $332 million last year. Bosch and InvenSense sensors—which are made with microelectromechanical systems (MEMS) technology—have knocked ST’s MEMS-based sensors out of a number of high-volume smartphones, including Apple’s newest iPhone handsets.

ST’s drop in sensor revenues and modest sales increases in MEMS-based actuators at Texas Instruments (micro-mirror devices for digital projectors and displays) and Hewlett-Packard (mostly inkjet-printer nozzle devices) moved TI and HP up one position in IC Insights’ 2014 ranking to second and third place, respectively (as shown in Figure 1). Infineon remained in fifth place in the sensor/actuator ranking with an 8 percent sales increase to $520 million last year. The 2015 O-S-D Report provides top 10 rankings of suppliers in sensors/actuators, optoelectronics, and discrete semiconductors, in addition to a top 30 O-S-D list of companies based on combined revenue in optoelectronics, sensors/actuators, and discretes.

Figure 1


The new O-S-D Report forecasts worldwide sensor sales to increase 7 percent in 2015 to reach a record-high $6.1 billion after growing 5 percent in 2014 to $5.7 billion and rising just 3 percent in 2013. Total actuator sales are expected to increase 7 percent in 2015 to $3.7 billion, which will tie the record high set in 2011. Actuator sales fell 10 percent in 2012 and dropped another 4 percent in 2013 before recovering in 2014 with a 7 percent increase to $3.5 billion. MEMS technology was used in about 34 percent of the 11.1 billion sensors shipped in 2014 and essentially all of the 999 million actuators sold last year, based on an analysis in the new O-S-D Report. Tiny MEMS structures are used in these devices to perform transducer functions (i.e., detecting and measuring changes around sensors for inputs in electronic systems, and initiating physical actions in actuators from electronic signals).
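The quoted figures are mutually consistent; a quick cross-check of the arithmetic:

```python
# Cross-checking the O-S-D Report figures quoted above
# (dollar values in $ billions, unit counts in billions of units).

sensor_sales_2014 = 5.7      # $B
actuator_sales_2014 = 3.5    # $B
sensors_shipped_2014 = 11.1  # billion units

# A 7% rise from the 2014 levels gives the forecast 2015 records:
print(round(sensor_sales_2014 * 1.07, 1))    # 6.1  ($B sensor forecast)
print(round(actuator_sales_2014 * 1.07, 1))  # 3.7  ($B actuator forecast)

# About 34% of sensors shipped in 2014 used MEMS technology:
print(round(0.34 * sensors_shipped_2014, 1))  # 3.8 billion MEMS sensors
```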

Researchers from the Georgia Institute of Technology have developed a novel cellular sensing platform that promises to expand the use of semiconductor technology in the development of next-generation bioscience and biotech applications.

The research is part of the Semiconductor Synthetic Biology (SSB) program sponsored and managed by Semiconductor Research Corporation (SRC). Launched in 2013, the SSB program concentrates on synergies between synthetic biology and semiconductor technology that can foster exploratory, multi-disciplinary, longer-term university research leading to novel, breakthrough solutions for a wide range of industries.

The Georgia Tech research proposes and demonstrates the world’s first multi-modality cellular sensor array implemented in a standard low-cost CMOS process. Each sensor pixel can concurrently monitor multiple different physiological parameters of the same cell and tissue samples to achieve holistic, real-time physiological characterization.

“Our research is intended to fundamentally revolutionize how biologists and bioengineers can interface with living cells and tissues and obtain useful information,” said Hua Wang, an assistant professor in the School of Electrical and Computer Engineering (ECE) at Georgia Tech. “Fully understanding the physiological behaviors of living cells or tissues is a prerequisite to further advance the frontiers of bioscience and biotechnology.”

Wang explains that the Georgia Tech research can have a positive impact on the use of semiconductors in healthcare applications, including more cost-effective development of pharmaceuticals, point-of-care devices, and low-cost home-based diagnostics and drug-testing systems. The research could also benefit defense and environmental monitoring applications through low-cost, field-deployable sensors for hazard detection.

Specifically, in the case of pharmaceutical development, the increasing cost of new medicine is largely due to the high risks involved in drug development. As a major sector of the healthcare market, the global pharmaceutical industry is expected to reach more than $1.2 trillion this year. However, on average, only one out of every ten thousand tested chemical compounds eventually becomes an approved drug product.

In the early phases of drug development (when thousands of chemical candidates are screened), in vitro cultured cells and tissues are widely used to identify and quantify the efficacy and potency of drug candidates by recording their cellular physiology responses to the tested compounds, according to the research.

Moreover, patient-to-patient variations often exist even when the same drug is administered at the same dosage. If the cell samples are derived from a particular patient, patient-specific drug responses can then be tested, which opens the door to future personalized medicine.

“Therefore, there is a tremendous need for low-cost sensing platforms to perform fast, efficient and massively parallel screening of in vitro cells and tissues, so that the promising chemical candidates can be selected efficiently,” said Wang, who also holds the Demetrius T. Paris Junior Professorship in the Georgia Tech School of ECE. “This existing need can be addressed directly by our CMOS multi-modality cellular sensor array research.”

Among the benefits enabled by the CMOS sensor array chips are built-in computation circuits for in-situ signal processing and sensor fusion of multi-modality sensor data. The chips also eliminate the need for external electronic equipment, allowing their use in general biology labs without dedicated electronic or optical setups.

Additionally, thousands of sensor array chips can operate in parallel to achieve high-throughput scanning of chemicals or drug candidates and real-time monitoring of their efficacy and toxicity. Compared with sequential scanning using conventional fluorescent scanners, this parallel scanning approach can achieve more than a 1,000-times throughput enhancement.
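As a back-of-the-envelope illustration (the sample count and scan time below are hypothetical; only the 1,000-times figure comes from the article), the speedup of parallel monitoring over sequential scanning grows linearly with the number of samples monitored at once:

```python
# Toy throughput model (hypothetical numbers). A sequential scanner
# visits samples one at a time; parallel sensor chips monitor all
# samples simultaneously.

def throughput_ratio(n_samples, t_scan_each, t_parallel_readout):
    sequential_time = n_samples * t_scan_each
    return sequential_time / t_parallel_readout

# 2,000 samples, 30 s per sequential fluorescent scan, vs. continuous
# parallel monitoring read out every 30 s:
print(throughput_ratio(2000, 30.0, 30.0))  # 2000.0, i.e. >1,000x
```

The model also highlights a qualitative difference: parallel chips observe every sample continuously, whereas a sequential scanner only revisits each sample once per full sweep.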

The Georgia Tech research team just wrapped up the first year of the 3-year project, with the sensor array demonstrated at the close of 2014 and presented at the IEEE International Solid-State Circuits Conference (ISSCC) in February 2015. In the coming year, the team plans to further increase the sensor array’s pixel density while helping improve packaging solutions compatible with existing drug-testing workflows.

“Georgia Tech’s research combines semiconductor integrated circuits and living cells to create an electronics-biology hybrid platform, which has tremendous societal and technological implications that can potentially lead to better and cheaper healthcare solutions,” said Victor Zhirnov, director of Cross-Disciplinary Research and Special Projects at SRC.