Category Archives: MEMS

Today, SEMI announced that SEMICON Europa 2015, the region’s largest microelectronics manufacturing event, will offer new themes to support the semiconductor industry’s development in Europe. The exposition and conferences will take place in Dresden on October 6-8. SEMICON Europa will feature over 100 hours of technical sessions and presentations addressing the critical issues and challenges facing the microelectronics industries. Registration for visitors and conference participants opens today.

For the first time, SEMICON Europa will offer specific sessions on microelectronics in the automotive and medical technology segments as well as events focusing on microelectronics for the smart factory of the future. “SEMICON Europa will be the forum bringing semiconductor technology into direct contact with the industries that are driving chip usage the most right now,” explains Stephan Raithel, managing director at SEMI in Berlin. “The largest growth rates over the next few years will be in the automotive industry, medical technology, and communication technology – exactly the application areas that we are focusing on at SEMICON Europa this year.”

Materials and equipment for the semiconductor industry will remain the core of SEMICON Europa 2015. However, the program will also cover new areas, including imaging, low power, and power electronics. In addition, Plastic Electronics 2015, the world’s largest conference with exhibitions in the field of flexible, large-scale and organic electronics, will complement SEMICON Europa. In all, the SEMICON Europa 2015 conference program includes over 40 trade conferences and high-quality discussion forums.

At the Fab Managers Forum, Reinhard Ploss, CEO of Infineon Technologies AG, and Hans Vloeberghs, European Business director of Fujifilm, will be the keynote speakers, focusing on how the European semiconductor industry can improve its competitiveness. The Semiconductor Technology Conference, focusing on productivity enhancements for future advanced technology nodes in semiconductor technology, features keynote speakers Peter Jenkins, VP of Marketing at ASML; Niall MacGearailt, Advanced Manufacturing Research program manager at Intel; and Paul Farrar, GM for the consortium G450C at SUNY Polytechnic Institute’s Colleges of Nanoscale Science and Engineering, which works on creating the conditions necessary for producing chips on 450mm wafers.

New at SEMICON Europa 2015: SEMI and its German partner HighTech Startbahn are expanding the Innovation Village, the ideal forum for European startups and high-growth businesses in search of investors. Sixty startups and young businesses will have the opportunity to present their ideas and business models to potential investors and industry partners. The application deadline is June 15.

Over 400 exhibitors at SEMICON Europa represent the suppliers of Europe’s leading microelectronics companies. From wafers to the finished product and every element in between, SEMICON Europa displays the best of microelectronics manufacturing. The exhibitor markets include semiconductors, MEMS, consumables, device fabrication, wafer processing, materials, assembly and packaging, process, test, and components.

To learn more (exhibition or registration), please visit: www.semiconeuropa.org/en.

Europe’s leading nanoelectronics institutes, Tyndall National Institute in Ireland, CEA-Leti in France and imec in Belgium, have entered a €4.7 million collaborative open-access project called ASCENT (Access to European Nanoelectronics Network). The project will mobilize European research capabilities at an unprecedented level and create a unique research infrastructure that will elevate Europe’s nanoelectronics R&D and manufacturing community.

ASCENT opens the doors to the world’s most advanced nanoelectronics infrastructures in Europe. Tyndall National Institute in Ireland, CEA-Leti in France and imec in Belgium, leading European nanoelectronics institutes, have entered into a collaborative open-access project called ASCENT (Access to European Nanoelectronics Network), to mobilise European research capabilities like never before.

The €4.7 million project will make the unique research infrastructure of three of Europe’s premier research centres available to the nanoelectronics modelling-and-characterisation research community.

ASCENT will share best scientific and technological practices, form a knowledge-innovation hub, train new researchers in advanced methodologies and establish a first-class research network of advanced technology designers, modellers and manufacturers in Europe. All this will strengthen Europe’s knowledge in the integral area of nanoelectronics research.

The three partners will provide researchers access to advanced device data, test chips and characterisation equipment.  This access programme will enable the research community to explore exciting new developments in industry and meet the challenges created in an ever-evolving and demanding digital world.

The partners’ respective facilities are truly world-class, representing over €2 billion of combined research infrastructure with unique credentials in advanced semiconductor processing, nanofabrication, heterogeneous and 3D integration, electrical characterisation and atomistic and TCAD modelling. This is the first time that access to such devices and test structures has been made available anywhere in the world.

The project will engage industry directly through an ‘Industry Innovation Committee’ and will feed back the results of the open research to device manufacturers, giving them crucial information to improve the next generation of electronic devices.

Speaking on behalf of project coordinator, Tyndall National Institute, CEO Dr. Kieran Drain said: “We are delighted to coordinate the ASCENT programme and to be partners with world-leading institutes CEA-Leti and imec. Tyndall has a great track record in running successful collaborative open-access programmes, delivering real economic and societal impact. ASCENT has the capacity to change the paradigm of European research through unprecedented access to cutting-edge technologies. We are confident that ASCENT will ensure that Europe remains at the forefront of global nanoelectronics development.”

“The ASCENT project is an efficient, strategic way to open the complementary infrastructure and expertise of Tyndall, Leti and imec to a broad range of researchers from Europe’s nanoelectronics modelling-and-characterisation sectors,” said Leti CEO Marie-Noëlle Semeria. “Collaborative projects like this, that bring together diverse, dedicated and talented people, have synergistic effects that benefit everyone involved, while addressing pressing technological challenges.”

“In the frame of the ASCENT project, three of Europe’s leading research institutes – Tyndall, imec and Leti – join forces in supporting the EU research and academic community, SMEs and industry by providing access to test structures and electrical data of state-of-the-art semiconductor technologies,” stated Luc Van den hove, CEO of imec. “This will enable them to explore exciting new opportunities in the ‘More Moore’ as well as the ‘More than Moore’ domains, and will allow them to participate and compete effectively on the global stage for the development of advanced nano-electronics.”

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No. 65384.

CEA-Leti is hosting its seventh workshop on innovative memory technologies following the 17th annual LetiDays Grenoble, June 24-25, on the Minatec campus.

Topics at LetiWorkshop Memory on June 26 will range from short-term to long-term memory solutions, including:

  • Flash memories for embedded or stand-alone applications
  • Resistive memory technologies, such as phase-change memories, conductive bridging memories and oxide-based memories
  • Innovative ideas covering non-volatile logics and bio-inspired architectures

The workshop will feature presentations by industrial and academic researchers with two main sessions in the morning. The first one, “NVM vision on standalone and embedded markets”, includes presentations by STMicroelectronics, Silicon Storage Technology and HGST, and the second one, “Emerging memory opportunities,” includes talks from Yole, IBM and Micron.

The afternoon is dedicated to niche applications and outlooks such as “NVM in disruptive applications”. This session will include talks on security applications, radiation effects and FPGA. The final session, “Memories for biomedical & neuromorphic applications”, features talks from Clinatec and the University of Milan.

Invited speakers are:

– STMicroelectronics, Delphine Maury
– SST, Nhan Do
– HGST, Jeff Childress
– CEA-Leti, Fabien Clermidy
– Yole Developpement, Yann De Charentenay
– IBM, Milos Stanisavljevic
– CEA-Leti, Gabriel Molas
– Micron, Innocenzo Tortorelli
– CEA-Tech, Romain Wacquez
– University of Padova, Alessandro Paccagnella
– CEA-Leti, Boubacar Traore
– CEA-Leti, Jeremy Guy
– CEA-Clinatec, François Berger
– University of Milan, Daniele Ielmini
– CEA-Leti, Daniele Garbin

Visit LetiWorkshop Memory for registration and other information.

Two young researchers working at the MIPT Laboratory of Nanooptics and Plasmonics, Dmitry Fedyanin and Yury Stebunov, have developed an ultracompact highly sensitive nanomechanical sensor for analyzing the chemical composition of substances and detecting biological objects, such as viral disease markers, which appear when the immune system responds to incurable or hard-to-cure diseases, including HIV, hepatitis, herpes, and many others. The sensor will enable doctors to identify tumor markers, whose presence in the body signals the emergence and growth of cancerous tumors.

This image shows the principle of the sensor.
CREDIT: Dmitry Fedyanin and Yury Stebunov

The sensitivity of the new device is best characterized by one key feature: according to its developers, the sensor can track changes of just a few kilodaltons in the mass of a cantilever in real time. One dalton is roughly the mass of a proton or neutron, and several thousand daltons is the mass of an individual protein or DNA molecule. So the new optical sensor will allow for diagnosing diseases long before they can be detected by any other method, which will pave the way for a new generation of diagnostics.

The device, described in an article published in the journal Scientific Reports, is an optical or, more precisely, optomechanical chip. “We’ve been following the progress made in the development of micro- and nanomechanical biosensors for quite a while now and can say that no one has been able to introduce a simple and scalable technology for parallel monitoring that would be ready to use outside a laboratory. So our goal was not only to achieve high sensitivity and make the sensor compact, but also to make it scalable and compatible with standard microelectronics technologies,” the researchers said.

Unlike similar devices, the new sensor has no complex junctions and can be produced through a standard CMOS process technology used in microelectronics. The sensor doesn’t have a single circuit, and its design is very simple. It consists of two parts: a photonic (or plasmonic) nanoscale waveguide to control the optical signal, and a cantilever hanging over the waveguide.

A cantilever, or beam, is a long and thin strip of microscopic dimensions (5 micrometers long, 1 micrometer wide and 90 nanometers thick), connected tightly to a chip. To get an idea how it works, imagine you press one end of a ruler tightly to the edge of a table and allow the other end to hang freely in the air. If you touch the latter with your other hand and then take your hand away, the ruler will start making mechanical oscillations at a certain frequency. That’s how the cantilever works. The difference between the oscillations of the ruler and the cantilever is only the frequency, which depends on the materials and geometry: while the ruler oscillates at several tens of hertz, the frequency of the cantilever’s oscillations is measured in megahertz. In other words, it makes a few million oscillations per second!

There are two optical signals going through the waveguide during oscillations: the first one sets the cantilever in motion, and the second one allows for reading the signal containing information about the movement. The inhomogeneous electromagnetic field of the control signal’s optical mode induces a dipole moment in the cantilever and simultaneously acts on that dipole, so that the cantilever starts to oscillate.

The sinusoidally modulated control signal makes the cantilever oscillate at an amplitude of up to 20 nanometers. The oscillations determine the parameters of the second signal, the output power of which depends on the cantilever’s position.

The highly localized optical modes of nanoscale waveguides, which create a strong electric field intensity gradient, are key to inducing cantilever oscillations. Because the changes of the electromagnetic field in such systems are measured in tens of nanometers, researchers use the term “nanophotonics” – so the prefix “nano” is not used here just as a fad! Without the nanoscale waveguide and the cantilever, the chip simply wouldn’t work. A big cantilever cannot be made to oscillate by freely propagating light, and the effects of chemical changes to its surface on the oscillation frequency would be less noticeable.

Cantilever oscillations make it possible to determine the chemical composition of the environment in which the chip is placed. That’s because the frequency of mechanical vibrations depends not only on the materials’ dimensions and properties, but also on the mass of the oscillatory system, which changes during a chemical reaction between the cantilever and the environment. By placing different reagents on the cantilever, researchers make it react with specific substances or even biological objects. If you place antibodies to certain viruses on the cantilever, it’ll capture the viral particles in the analyzed environment. Oscillations will occur at a lower or higher amplitude depending on the virus or the layer of chemically reactive substances on the cantilever, and the electromagnetic wave passing through the waveguide will be dispersed by the cantilever differently, which can be seen in the changes of the intensity of the readout signal.
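For a rough sense of scale, the resonance of such a cantilever behaves like a mass on a spring, f = (1/2π)·√(k/m), so a small added mass Δm shifts the frequency by roughly Δf ≈ −(f/2)·(Δm/m). The short sketch below only illustrates that relation; the frequency, effective mass and added mass are assumptions for illustration, not the authors’ figures:

f0 = 5e6                    # assumed resonance frequency, Hz (megahertz range, per the article)
m_eff = 1e-15               # assumed effective cantilever mass, kg (~1 picogram)
delta_m = 5000 * 1.66e-27   # an added mass of 5 kilodaltons, converted to kg

delta_f = -0.5 * f0 * (delta_m / m_eff)   # first-order frequency shift for delta_m << m_eff
print(f"Frequency shift ~ {delta_f:.2e} Hz")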

Calculations done by the researchers showed that the new sensor will combine high sensitivity with comparative ease of production and miniature dimensions, allowing it to be used in all kinds of portable devices, such as smartphones, wearable electronics, etc. One chip, several millimeters in size, will be able to accommodate several thousand such sensors, configured to detect different particles or molecules. Thanks to the simplicity of the design, the price will most likely depend on the number of sensors and be much more affordable than that of competing devices.

Scientists at the U.S. Department of Energy’s Argonne National Laboratory have found a way to use tiny diamonds and graphene to give friction the slip, creating a new material combination that demonstrates the rare phenomenon of “superlubricity.”

Led by nanoscientist Ani Sumant of Argonne’s Center for Nanoscale Materials (CNM) and Argonne Distinguished Fellow Ali Erdemir of Argonne’s Energy Systems Division, the five-person Argonne team combined diamond nanoparticles, small patches of graphene – a two-dimensional single-sheet form of pure carbon – and a diamond-like carbon material to create superlubricity, a highly-desirable property in which friction drops to near zero.

According to Erdemir, as the graphene patches and diamond particles rub up against a large diamond-like carbon surface, the graphene rolls itself around the diamond particle, creating something that looks like a ball bearing on the nanoscopic level. “The interaction between the graphene and the diamond-like carbon is essential for creating the ‘superlubricity’ effect,” he said. “The two materials depend on each other.”

At the atomic level, friction occurs when atoms in materials that slide against each other become “locked in state,” which requires additional energy to overcome. “You can think of it as like trying to slide two egg cartons against each other bottom-to-bottom,” said Diana Berman, a postdoctoral researcher at the CNM and an author of the study. “There are times at which the positioning of the gaps between the eggs – or in our case, the atoms – causes an entanglement between the materials that prevents easy sliding.”

By creating the graphene-encapsulated diamond ball bearings, or “scrolls”, the team found a way to translate the nanoscale superlubricity into a macroscale phenomenon. Because the scrolls change their orientation during the sliding process, enough diamond particles and graphene patches prevent the two surfaces from becoming locked in state. The team used large-scale atomistic computations on the Mira supercomputer at the Argonne Leadership Computing Facility to prove that the effect could be seen not merely at the nanoscale but also at the macroscale.

“A scroll can be manipulated and rotated much more easily than a simple sheet of graphene or graphite,” Berman said.

However, the team was puzzled that while superlubricity was maintained in dry conditions, in a humid environment this was not the case. Because this behavior was counterintuitive, the team again turned to atomistic calculations. “We observed that the scroll formation was inhibited in the presence of a water layer, therefore causing higher friction,” explained co-author Argonne computational nanoscientist Subramanian Sankaranarayanan.

While the field of tribology has long been concerned with ways to reduce friction – and thus the energy demands of different mechanical systems – superlubricity has been treated as a tough proposition. “Everyone would dream of being able to achieve superlubricity in a wide range of mechanical systems, but it’s a very difficult goal to achieve,” said Sanket Deshmukh, another CNM postdoctoral researcher on the study.

“The knowledge gained from this study,” Sumant added, “will be crucial in finding ways to reduce friction in everything from engines or turbines to computer hard disks and microelectromechanical systems.”

Different forecasting algorithms are highlighted and a framework is provided on how best to estimate product demand using a combination of qualitative and quantitative approaches.

BY JITESH SHAH, Integrated Device Technology, San Jose, CA

Nothing in the world of forecasting is more complex than predicting demand for semiconductors, but this is one business where accurate forecasting could be a matter of long-term survival. Not only will the process of forecasting help reduce costs for the company, by holding the right amount of inventory in the channels and knowing what parts to build when, but implementing a robust and self-adaptive system will also keep customers happy by providing them with the products they need when they need them. Other benefits include improved vendor engagements and optimal resource (labor and capital) allocation.

Talking about approaches…

There are two general approaches to forecasting a time-based event: a qualitative approach and a quantitative, more numbers-based approach. If historical time-series data on the variable of interest is sketchy, or if the event being forecasted is related to a new product launch, a more subjective or expert-based predictive approach is necessary, but we all intuitively know that. New product introductions usually involve active customer and vendor engagements, and that allows us to have better control over what to build, when, and in what quantity. Even so, the Bass Diffusion Model, a technique geared towards predicting sales for a new product category, could be employed, but that will not be discussed in this context.

Now, if past data on the forecasted variable is handy and quantifiable, and it’s fair to assume that the pattern of the past will likely continue in the future, then a more quant-based, algorithmic and somewhat automated approach is almost a necessity.

But how would one go about deciding whether to use an automated approach to forecasting or a more expert-based approach? A typical semiconductor company’s products could be segmented into four quadrants (FIGURE 1), and deciding whether to automate the process of forecasting will depend on which quadrant the product fits best.

Figure 1

Time series modeling

Past shipment data over time for a product, or a group of products, you are trying to forecast demand for is usually readily available, and that is generally the only data you need to design a system to automate the forecasting process. The goal is to discover a pattern in the historical, time-series data and extrapolate that pattern into the future. An ideal system should be built in such a way that it evolves, or self-adapts, and selects the “right” algorithm from the pre-built toolset if the shipment pattern changes. A typical time-series forecasting model would have just two variables: an independent time variable and a dependent variable representing the event we are trying to forecast.

That event Qt (order, shipment, etc.) we are trying to forecast is more or less a function of the product’s life-cycle or trend, seasonality or business cycle and randomness, shown in the “white board” style illustration of FIGURE 2.
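In code, one common way to express that decomposition is additively; the additive form and the numbers below are assumptions for illustration only, since the whiteboard sketch in Figure 2 could equally imply a multiplicative form:

import math, random

# Q(t) ~ trend(t) + seasonality(t) + randomness(t): an additive decomposition, assumed for illustration
def demand(t, period=4):
    trend       = 100 + 2.5 * t                              # product life-cycle / long-run trend
    seasonality = 15 * math.sin(2 * math.pi * t / period)    # repeating seasonal or business cycle
    randomness  = random.gauss(0, 3)                         # short-term noise
    return trend + seasonality + randomness

series = [demand(t) for t in range(1, 21)]   # 20 periods of synthetic demand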

Figure 2

Trend and seasonality or business cycle are typically associated with longer-range patterns and hence are best suited to be used to make long-term forecasts. A shorter-term or horizontal pattern of past shipment data is usually random and is used to make shorter-term forecasts.

Forecasting near-term events

Past data exhibiting randomness with horizontal patterns can be reasonably forecasted using either a Naïve method or a simple averaging method. The choice between the two will depend on which one gives lower Mean Absolute Error (MAE) and Mean Absolute % Error (MAPE).

Naïve Method The sample table in FIGURE 3 shows 10 weeks’ worth of sales data. Using the Naïve approach, the forecasted value for the 2nd week is simply what was shipped in the 1st week. The forecasted value for the 3rd week is the actual sales value in the 2nd week, and so on. The difference between the actual value and the forecasted value represents the forecast error, and its absolute value is used to calculate the total error. MAE is the total error divided by the number of forecasted values. A similar approach is used to calculate MAPE, but now each individual error is divided by the actual sales volume to calculate a % error; these are then summed and divided by the number of forecasted values.
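A minimal Python sketch of the Naïve forecast and the MAE/MAPE calculations might look like the following; the weekly sales numbers are made up for illustration, since Figure 3’s data is not reproduced here:

# Naïve forecast: the forecast for week t+1 is simply the actual for week t.
sales = [22, 25, 21, 28, 24, 26, 23, 27, 25, 29]   # 10 weeks of unit sales (hypothetical)

forecasts = sales[:-1]          # forecasts for weeks 2..10
actuals   = sales[1:]           # the actuals those forecasts are compared against

errors = [abs(a - f) for a, f in zip(actuals, forecasts)]
mae  = sum(errors) / len(errors)                                  # Mean Absolute Error
mape = sum(e / a for e, a in zip(errors, actuals)) / len(errors)  # Mean Absolute % Error
print(f"Naive MAE = {mae:.2f}, MAPE = {mape:.1%}")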

Figure 3

Averaging Instead of using only the last observed event to forecast the next event, a better approach would be to use the mean of all past observations as the next period’s forecast. For example, the forecasted value for the 3rd week is the mean of the 1st and 2nd weeks’ actual sales values. The forecasted value for the 4th week is the mean of the previous three actual sales values, and so on (FIGURE 4).
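Continuing the same hypothetical series, the cumulative-averaging forecast can be sketched as:

# Averaging: the forecast for week t+1 is the mean of all actuals through week t.
sales = [22, 25, 21, 28, 24, 26, 23, 27, 25, 29]   # hypothetical data, as above

forecasts = [sum(sales[:t]) / t for t in range(1, len(sales))]   # forecasts for weeks 2..10
actuals   = sales[1:]

errors = [abs(a - f) for a, f in zip(actuals, forecasts)]
mae  = sum(errors) / len(errors)
mape = sum(e / a for e, a in zip(errors, actuals)) / len(errors)
print(f"Averaging MAE = {mae:.2f}, MAPE = {mape:.1%}")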

Figure 4

MAE and MAPE for the Naïve method are 4.56 and 19% respectively, and the same for the averaging method are 3.01 and 13% respectively. Right there, one can conclude that averaging is better than the simple Naïve approach.

Horizontal Pattern with Level Shift But what happens when there is a sudden shift (anticipated or not) in the sales pattern like the one shown in FIGURE 5?

Figure 5

The simple averaging approach needs to be tweaked to account for that, and that is where a moving-average approach is better suited. Instead of averaging across the entire time series, only the 2, 3 or 4 most recent time periods are used to calculate the forecast value. How many periods to use will depend on which window gives the smallest MAE and MAPE values, and that can and should be parameterized and coded. The tables in FIGURE 6 compare the two approaches, and clearly the moving-average approach is a better fit for predicting future events.
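A sketch of how that window-selection step could be parameterized, using an assumed series with a level shift rather than the article’s Figure 6 data:

def moving_average_errors(sales, window):
    # Forecast for period t is the mean of the previous `window` actuals.
    forecasts = [sum(sales[t - window:t]) / window for t in range(window, len(sales))]
    actuals = sales[window:]
    errors = [abs(a - f) for a, f in zip(actuals, forecasts)]
    mae = sum(errors) / len(errors)
    mape = sum(e / a for e, a in zip(errors, actuals)) / len(errors)
    return mae, mape

sales = [22, 25, 21, 28, 24, 26, 41, 44, 42, 45, 43, 46]   # hypothetical series with a level shift

# Try 2-, 3- and 4-period windows and keep the one with the lowest MAE.
best = min(range(2, 5), key=lambda w: moving_average_errors(sales, w)[0])
print("Best window:", best, "errors (MAE, MAPE):", moving_average_errors(sales, best))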

Figure 6

Exponential Smoothing But oftentimes, there is a better approach, especially when the past data exhibits severe and random level shifts.

This approach is well suited for such situations because, over time, the exponentially weighted moving average of the entire time series tends to deemphasize older data while still including it and, at the same time, weighs recent observations more heavily. That relationship between the actual and forecasted values is shown in FIGURE 7.

Figure 7

Again, the lowest MAE and MAPE will help decide the optimal value for the smoothing constant and, as always, this can easily be coded based on the data you already have, and can be automatically updated as new data trickles in.
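One way to code that search for the smoothing constant, using the standard single-exponential form F(t+1) = α·A(t) + (1−α)·F(t) and an assumed data series:

def exp_smooth_mae(sales, alpha):
    # Single exponential smoothing: F(t+1) = alpha * A(t) + (1 - alpha) * F(t).
    forecast = sales[0]              # the forecast for period 2 is the first observed value
    errors = []
    for actual in sales[1:]:
        errors.append(abs(actual - forecast))
        forecast = alpha * actual + (1 - alpha) * forecast
    return sum(errors) / len(errors)

sales = [22, 25, 21, 40, 24, 46, 23, 41, 25, 44]   # hypothetical series with random level shifts

# Grid-search the smoothing constant and keep the alpha with the lowest MAE.
alphas = [a / 10 for a in range(1, 10)]
best_alpha = min(alphas, key=lambda a: exp_smooth_mae(sales, a))
print("Best alpha:", best_alpha, "MAE:", round(exp_smooth_mae(sales, best_alpha), 2))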

But based on the smoothing equation above, one must wonder how the entire time series is factored in when only the most recent actual and forecasted values are used as part of the next period’s forecast. The math in FIGURE 8 explains how.

Figure 8

The forecast for the second period is assumed to be the first observed value. The third period’s forecast is the first truly derived forecast, and with subsequent substitutions one quickly finds that the forecast for the nth period is a weighted average of all previously observed events. The weight ascribed to recent events compared to earlier ones is shown in the plot in FIGURE 9.
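Those weights can be printed directly: expanding F(n+1) = α·A(n) + (1−α)·F(n) repeatedly puts a weight of α·(1−α)^k on the observation k periods back, so older data is never dropped but its influence decays geometrically. The alpha below is chosen arbitrarily for illustration:

alpha = 0.3   # illustrative smoothing constant
weights = [alpha * (1 - alpha) ** k for k in range(10)]   # weight on the observation k periods back
for k, w in enumerate(weights):
    print(f"lag {k}: weight = {w:.4f}")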

Figure 9

Making longer term forecasts

A semiconductor product’s lifecycle is usually measured in months but, surprisingly, there are quite a few products with lifespans measured in years, especially when the end applications exhibit long and growing adoption cycles. These products not only exhibit shorter-term randomness in their time series but also show a longer-term seasonal/cyclical nature with a growing or declining trend over the years.

The first step in estimating the forecast over the longer term is to smooth out some of that short-term randomness using the approaches discussed before. The unsmoothed and smoothed curves might resemble the plot in FIGURE 10.

Figure 10

Clearly, the data exhibits a long-term trend along with a seasonal or cyclical pattern that repeats every year, and Ordinary Least Squares (OLS) regression is the ideal approach to forming a function that will help estimate that trend and the parameters involved. But before crunching the numbers, the dataset has to be prepped to include a set of dichotomous variables representing the different intervals in that seasonal behavior. Since the seasonality in this situation is by quarters, representing Q1, Q2, Q3 and Q4, only three of them are included in the model. The fourth one, Q2 in this case, forms the basis upon which to measure the significance of the other three quarters (FIGURE 11).

Figure 11

The functional form of the forecasted value by quarter looks something like what’s shown in FIGURE 12.

Figure 12

The intercept b0 moves up or down based on whether the quarter in question is Q2 or not. If b2, b3 and b4 are positive, Q2 will exhibit the lowest expected sales volume. The other three quarters will show increasing expected sales in line with the increase in the respective estimated parameter values. And this equation can be readily used to reasonably forecast an event a few quarters or a few years down the road.
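A hedged sketch of that regression, fitting a linear trend plus dummies for Q1, Q3 and Q4 (with Q2 as the base quarter, as in the article) by ordinary least squares; the quarterly series and the mapping of b2, b3, b4 to specific quarters are assumptions for illustration, since the data behind Figures 10-12 is not reproduced:

import numpy as np

# Synthetic quarterly sales: upward trend + seasonal pattern + noise (illustrative only).
quarters = np.arange(1, 21)                   # 20 quarters (5 years), t = 1..20
q_of_year = ((quarters - 1) % 4) + 1          # 1, 2, 3, 4, 1, 2, ...
rng = np.random.default_rng(0)
sales = (100 + 2.5 * quarters
         + np.select([q_of_year == 1, q_of_year == 3, q_of_year == 4], [8, 12, 20], default=0)
         + rng.normal(0, 3, quarters.size))

# Design matrix: intercept, trend, and dummies for Q1, Q3, Q4 (Q2 is the base quarter).
X = np.column_stack([np.ones(quarters.size),
                     quarters,
                     (q_of_year == 1).astype(float),
                     (q_of_year == 3).astype(float),
                     (q_of_year == 4).astype(float)])

b, *_ = np.linalg.lstsq(X, sales, rcond=None)   # OLS estimates of b0..b4
b0, b1, b2, b3, b4 = b
print("intercept b0 =", round(b0, 2), " trend b1 =", round(b1, 2))
print("Q1, Q3, Q4 effects relative to Q2:", np.round([b2, b3, b4], 2))

# Forecast a future quarter, e.g. quarter 24, which falls in Q4:
t = 24
print("Forecast for quarter 24 (Q4):", round(b0 + b1 * t + b4, 2))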

So there you have it. This shows how easy it is to automate some features of the forecasting process, and the importance of building an intelligent, self-aware and adaptive forecasting system. The results will not only reduce cost but also help refocus your supply-chain planning efforts on bigger and better challenges.

JITESH SHAH is a principal engineer with Integrated Device Technology, San Jose, CA

Suppliers of MEMS-based devices rode a safety sensing wave in 2014 to reach record turnover in automotive applications, according to analysis from IHS, the global source of critical information and insight.

Mandated safety systems such as Electronic Stability Control (ESC) and Tire Pressure Monitoring Systems (TPMS) – which attained full implementation in new vehicles in major automotive markets last year – are currently driving revenues for MEMS sensors. Those players with strong positions in gyroscopes, accelerometers and pressure sensors needed in these systems grew as well, while companies in established areas like high-g accelerometers for frontal airbags and pressure sensors for side airbags also saw success.

Major suppliers of pressure sensors to engines similarly blossomed – for staple functions like manifold absolute air intake and altitude sensing – but also for fast-growing applications like vacuum brake boosting, gasoline direct injection and fuel system vapor pressure sensing.

Bosch was the overall number one MEMS supplier with US$790 million of devices sold last year, close to three times that of its nearest competitor, Sensata (US$268 million). Bosch has a portfolio of MEMS devices covering pressure, flow, accelerometers and gyroscopes, and also has a leading position in more than 10 key applications. The company grew strongly in ESC and roll-over detection applications, and in key engine measurements like manifold absolute pressure (MAP) and mass air flow on the air intake, vacuum brake booster pressure sensing and common rail diesel pressure measurement.

Compared to 2013, Sensata jumped to second place in 2014 ahead of Denso and Freescale, largely on strength in both safety and powertrain pressure sensors, but also through its acquisition of Schrader Electronics, which provides Sensata with a leading position among tire pressure-monitoring sensor suppliers.

While Sensata is dominant in TPMS and ESC pressure sensors, it also leads in harsh applications like exhaust gas pressure measurement. Freescale, on the other hand, is second to Bosch in airbag sensors and has made great strides in its supply of pressure sensors for TPMS applications.

Despite good results in 2014, Denso dropped two places compared to its overall second place in 2013, largely as a result of the weakened Yen. Denso excelled in MAP and barometric pressure measurement in 2014, but also ESC pressure and accelerometers. Denso has leadership in MEMS-based air conditioning sensing and pressure sensors for continuous variable transmission systems, and is also a supplier of exhaust pressure sensors to a major European OEM.

Secure in its fifth place, Analog Devices was again well positioned with its high-g accelerometers and gyroscopes in safety sensing, e.g. for airbag and ESC vehicle dynamics systems, respectively.

The next three players in the top 10, in order, Infineon, Murata and Panasonic, likewise have key sensors to offer for safety. Infineon is among the leading suppliers of pressure sensors to TPMS systems, while Murata and Panasonic serve ESC with gyroscope and accelerometers to major Tier Ones.

The top 10 represents 78 percent of the automotive MEMS market volume, which reached $2.6 billion in 2014. By 2021, this market will grow to $3.4 billion, a CAGR of 3.4 percent, given expected growth for four main sensors — pressure, flow, gyroscopes and accelerometers.  In addition, night-vision microbolometers from FLIR and ULIS and humidity sensors from companies like Sensirion and E+E Elektronik for window defogging will also add to the diversity of the mix in 2021.


DLP chips from Texas Instruments for advanced infotainment displays will similarly bolster the market further in future. More details can be found in the IHS Technology H1 2015 report on Automotive MEMS.


SEMI, the global industry association for companies that supply manufacturing technology and materials to the world’s chip makers, today reported that worldwide semiconductor manufacturing equipment billings reached US$9.52 billion in the first quarter of 2015. The billings figure is 7 percent higher than the fourth quarter of 2014 and 6 percent lower than the same quarter a year ago. The data is gathered jointly with the Semiconductor Equipment Association of Japan (SEAJ) from over 100 global equipment companies that provide data on a monthly basis.

Worldwide semiconductor equipment bookings were $9.66 billion in the first quarter of 2015. The figure is 2 percent lower than the same quarter a year ago and 3 percent lower than the bookings figure for the fourth quarter of 2014.

The quarterly billings data, in billions of U.S. dollars, with quarter-over-quarter and year-over-year growth rates by region, are as follows:

Region          1Q2015   4Q2014   1Q2014   1Q15/4Q14 (Q-o-Q)   1Q15/1Q14 (Y-o-Y)
Korea             2.69     2.09     2.03         29%                 33%
Taiwan            1.81     2.03     2.59        -11%                -30%
North America     1.47     1.83     1.85        -19%                -20%
Japan             1.26     1.11     0.96         13%                 31%
China             1.17     0.68     1.71         73%                -32%
Europe            0.69     0.58     0.58         19%                 19%
Rest of World     0.43     0.59     0.42        -27%                  1%
Total             9.52     8.91    10.15          7%                 -6%

Source: SEMI/SEAJ June 2015; Note: Figures may not add due to rounding.

BY DR. RANDHIR THAKUR, Executive Vice President, General Manager, Silicon Systems Group, APPLIED MATERIALS, INC.

For 50 years, Moore’s Law has served as a guide for technologists everywhere in the world, setting the pace for the semiconductor industry’s innovation cycle. Moore’s Law has made a tremendous impact not only on the electronics industry, but on our world and our everyday life. It led us from the infancy of the PC era, through the formative years of the internet, to the adolescence of smartphones. Now, with the rise of the Internet of Things, market researchers forecast that in the next 5 years, the number of connected devices per person will more than double, so even after 50 years we don’t see Moore’s Law slowing down.

As chipmakers work tirelessly to continue device scaling, they are encountering daunting technical and economic hurdles. Increasing complexity is driving the need for new materials and new device architectures. Enabling these innovations and the node-over-node success of Moore’s Law requires advancements in precision materials engineering, including precision films, materials removal, materials modification and interface engineering, supported by metrology and inspection.

Though scaling is getting harder, I am confident Moore’s Law will continue because equipment suppliers and chipmakers never cease to innovate. As we face the increasing challenges of new technology inflections, earlier engagement in the development cycle between equipment suppliers and chipmakers is required to uncover new solutions. Such early and deep collaboration is critical to delivering complex precision materials engineering solutions on time. In fact, in the mobility era, earlier and deeper collaboration across the entire value chain is essential (applications, system/hardware, fabless, foundry/IDM, equipment supplier, chemical supplier, component supplier, etc.) to accelerate time to market and extend Moore’s Law.

Today, new 3D architectures, FinFET and 3D NAND, are enabling the extension of Moore’s Law. Dense 3D structures with high aspect ratios create fundamental challenges in device manufacturing. Further, the industry has shifted much of its historical reliance from litho-enabled scaling to materials-enabled scaling, requiring thinner precision films with atomic-scale accuracy. The emphasis on thin conformal films, which can be 2,000 times thinner than a human hair, makes it increasingly critical to engineer film properties and manage film interactions between adjacent film surfaces. Selective processing is also a growing requirement, particularly for the deposition and removal of films. We expect more selective applications beyond epitaxy and cobalt liner deposition. There will also be a major expansion of new materials in addition to the key inflection of high-k metal gate that helped to reduce power leakage issues associated with scaling.

Gordon Moore’s prediction that ignited an industry will continue to influence our way of life through a combination of architecture and material changes. New process designs and new ways to atomically deposit materials are needed. More processes will be integrated on the same platform without vacuum breaks to create pristine interfaces. As an equipment supplier, we have to manage longer R&D cycles to support the industry’s roadmap, and plan for faster ramp and yield curves. Of utmost importance is staying close to our customers to ensure we deliver solutions with the desired economic and technical benefits.

Looking at the electronics industry from where it is today out to 2020, many more devices will be in use, the world will be more connected and, particularly in emerging markets, there will be greater consumer appetite for more products with advanced features. Given these transformations and demand, I think the growth and excitement in our industry will continue for many more years, thanks to Moore’s Law.

Imagination Technologies (IMG.L) and TSMC announce a collaboration to develop a series of advanced IP subsystems for the Internet of Things (IoT) to accelerate time to market and simplify the design process for mutual customers. These IP platforms, complemented by highly optimized reference design flows, bring together the breadth of Imagination’s IP with TSMC’s advanced process technologies from 55nm down to 10nm.

The IoT IP subsystems in development include small, highly-integrated connected solutions for simple sensors which combine an entry-level M-class MIPS CPU with an ultra-low power Ensigma Whisper RPU for low-power Wi-Fi, Bluetooth Smart and 6LowPan, as well as OmniShield multi-domain hardware enforced security, and on-chip RAM and flash. The advanced RF and embedded flash capabilities from TSMC enable Imagination to push the boundaries of IoT integration.

At the higher end, highly-integrated and sophisticated audio and vision sensors will be a key component of future mutual customers’ SoCs for a wide range of IoT applications such as smart surveillance, retail analytics and autonomous vehicles. As part of the collaboration, Imagination and TSMC are working together to realize reference IP subsystems that bring together Imagination’s PowerVR multimedia IP, MIPS CPUs, Ensigma RPUs and OmniShield technology to create highly-integrated, highly-intelligent connected audio and vision sensor IP platforms. These IP subsystems will leverage advanced features such as GPU compute, power-managed CPU clusters and on-chip high-bandwidth communications, demonstrating that high-performance local processing and connectivity can be integrated efficiently and cost-effectively.

Tony King-Smith, EVP marketing, Imagination, says: “We have been working with TSMC for more than two years on advanced IP subsystems for IoT and other connected products. Many of our licensees rely on TSMC to provide them with leading-edge, low-power, high-performance silicon foundry capabilities. Through our ongoing collaboration with TSMC, we are focused on creating meaningful solutions that will help our mutual customers quickly create differentiated, secure and highly integrated products.”

Suk Lee, TSMC senior director, Design Infrastructure Marketing Division, says: “In order to simplify our customers’ designs and shorten their time-to-market, TSMC and our ecosystem partners are transitioning from chip-design enablers to subsystem enablers. We are working closely with Imagination, an established IP leader, as part of our new IoT Subsystem Enablement initiative to help companies get their IoT and connected products to market more quickly and easily.”