Category Archives: MEMS

In its Mid-Year Update to the 2018 McClean Report, IC Insights updated its forecast of sales growth for each of the 33 major IC product categories defined by WSTS (Figure 1).  IC Insights now projects that seven product categories will exceed the 16% growth rate expected from the total IC market this year. For the second consecutive year, the DRAM market is forecast to top all IC product segments with 39% growth. Overall, 13 product categories are forecast to experience double-digit growth and 28 total IC product categories are expected to post positive growth this year, down slightly from 29 segments in 2017.

Rising average selling prices for DRAM continued to boost the DRAM market through the first half of the year and into August.  However, IC Insights believes the DRAM ASP (and subsequent market growth) is at or near its peak, as a big rise in DRAM capital expenditures for planned capacity upgrades and expansions is likely to put the brakes on steep market growth beginning in 2019.

In second place with 29% growth is the Automotive—Special-Purpose Logic market, which is being lifted by the growing number of onboard electronic systems now found on new cars. Backup cameras, blind-spot (lane-departure) detectors, and other “intelligent” systems are mandated or are being added across all new vehicles—entry level to luxury—and are expected to contribute to the semiconductor content per new car growing to more than $540 per vehicle in 2018.

Wireless Comm—Application-Specific Analog is forecast to grow 23% in 2018, as the world becomes increasingly dependent on the Internet and demand for wireless connectivity continues to rise. Similarly, demand for medical/health electronics systems connectivity using the Internet will help the market for Industrial/Other Application-Specific Analog outpace total IC market growth in 2018.

Among the seven categories showing better than total IC market growth this year, three are forecast to be among the largest of all IC product categories in terms of dollar volume. DRAM (#1 with $101.6 billion in sales), NAND Flash (#2 with $62.6 billion), and Computer and Peripherals—Special-Purpose Logic (#4 with $27.6 billion) prove that big markets can still achieve exceptional percentage growth.

Figure 1

Amid rapid custom silicon growth and innovation, Open-Silicon today announced the appointment of semiconductor industry veteran Anand Bariya as VP of engineering. Anand will be responsible for managing the physical implementation of silicon, and facilitating the delivery of reliable parts with predictable schedules. He will be instrumental in the strategic planning of silicon engineering, from RTL to working silicon, and will oversee Open-Silicon’s physical design teams. Anand will report to Shafy Eltoukhy, SVP of Operations and GM of Open-Silicon, a SiFive company.

“I’m proud to join the team at Open-Silicon,” said Anand. “The company is a proven leader with a strong record of providing custom silicon solutions. I look forward to working closely with customers and partners to develop innovative, full turnkey custom solutions that not only meet the highest quality and reliability expectations, but are delivered with predictable schedules.”

Anand has over 25 years of experience in the semiconductor industry, both in Silicon Valley and in India. Prior to joining Open-Silicon, he served as senior director at Broadcom in India, where he managed engineering operations. Prior to that, he spent over six years at NetLogic Microsystems (later acquired by Broadcom), where he served as vice president and managing director. Before joining NetLogic, he managed the ASIC Design Center at Toshiba America. He has also held management positions at Vitesse Semiconductor, Cadence Design Systems and National Semiconductor. Anand earned a PhD in chemical and electrical engineering at Stanford University, and a bachelor’s degree in chemical engineering at the Indian Institute of Technology, Bombay.

“Anand brings extensive leadership and a deep understanding of the engineering required for complex SoC design and delivery,” said Shafy Eltoukhy, SVP of Operations and GM of Open-Silicon, a SiFive company. “His proven leadership and track record of execution and delivery will be instrumental in building on Open-Silicon’s momentum in custom SoCs.”

With the MEMS and sensors industry on the cusp of explosive growth, MITRE Corp. cyber security expert Cynthia Wright will urge industry executives to lay the groundwork for securing hundreds of billions of autonomous mobility devices in her keynote at the 14th annual MEMS & Sensors Executive Congress (October 29-30, 2018 in Napa Valley, Calif.). Wright, a retired military officer with over 25 years of experience in national security and cyber strategy and policy, will highlight the critical importance of device security and privacy in ensuring reliability and end-user safety.

Hosted by MEMS & Sensors Industry Group (MSIG), a SEMI technology community, the event also features DARPA’s Ron Polcawich, who will introduce his agency’s innovation and production program, a government-industry collaboration that aims to dramatically speed design-to-development of MEMS.

Spurred by surging growth in autonomous mobility devices such as smartphones, smart speakers, autonomous cars, and fitness and healthcare wearables, the global market for MEMS and sensors is expected to double in the next five years, reaching $100B by 2023.[1] Featured speakers at MEMS & Sensors Executive Congress will examine the enabling role of MEMS and sensors in these diverse intelligent applications.

  • Autonomous and Electric Cars: What’s in for Conventional MEMS & Sensors? – Jérémie Bouchaud, IHS Markit
  • Status, Challenges and Opportunities of the 2018 MEMS & Sensors Industry – Guillaume Girardin, Yole Développement
  • Smart Ear: Will Innovation Lead to Technology with Human-like Audio Capabilities? – Andreas Kopetz, Infineon Technologies AG
  • Sensors in Food and Agriculture – David Mount, ULVAC
  • Environmental Sensor Systems Enabling Autonomous Mobility – Marcellino Gemelli, Bosch Sensortec
  • It’s Time for Wearables to Revolutionize Healthcare – Craig Easson and Sudir Mulpuru, Maxim Integrated

Special Events

  • Technology Showcase – Finalists will compete for audience votes as they demo their MEMS/sensors-enabled mobility products.
  • Alquimista Cellars Wine Tasting and Dinner on Monday, October 29

MSEC will take place October 29-30, 2018, at the Silverado Resort and Spa in Napa Valley, Calif.

By Serena Brischetto

SEMI’s Serena Brischetto caught up with Zimmer and Peacock Director Martin Peacock to discuss sensor opportunities and challenges ahead of the European MEMS & Sensors and Imaging & Sensors Summits.

SEMI: Sensors enable a myriad of applications, from measuring caffeine in coffee, the hotness of chillies and ions in the blood of patients, to detecting sulfite levels in wine. But what, in your opinion, is the hottest application today?

Peacock: The hot topic now is point-of-care testing for medical diagnostics and wearable biosensors, including continuous glucose monitoring (CGM) sensors for Type 1 diabetics. At the moment there are three CGM market leaders: Dexcom, Abbott and Medtronic, but in addition several companies are currently developing CGM technologies.

SEMI: What are engineers working on to improve sensors’ efficiency?

Peacock: Though many groups are working on increasing sensor sensitivity, the big issues are manufacturing and the repeatability of manufacturing. Our engineers are currently working on making our manufacturing repeatable.

The issue with biosensors and medical diagnostics is that the volumes of sensors are much lower than the manufacturing volumes traditionally experienced in the semiconductor industry. This is simply due to the fact that the human health market is very fragmented, so outside of diabetes it is hard to identify a biosensor or medical diagnostic that is required at volumes the semiconductor industry would consider high.

SEMI: And what are the main challenges?

Peacock: Making biosensors at high volume, with tight tolerances and at low cost. As discussed above, the issue with biosensors is that they are not necessarily required at high volumes, so a manufacturer is trying to produce high-quality products at relatively low manufacturing volumes, all the while trying to do this at a price point that the market can bear. To summarise the main challenge in biosensors in one sentence: this is a very fragmented market.

SEMI: What techniques are currently being deployed by Zimmer and Peacock to overcome those challenges?

Peacock: Zimmer and Peacock has a proprietary database system for organizing our development and manufacturing data so we can track manufacturing quality and determine how we are performing. We are dealing with the fragmented market by taking a platform approach, ensuring that all our clients share the same supply chain up to the point where we functionalise the biosensors with their specific biochemistry. This means that our clients get economies of scale, even though they require their products in relatively small volumes.

SEMI: What do you expect from SEMI European MEMS & Sensors Summit 2018 and why do you recommend attending in Grenoble?

Peacock: Zimmer and Peacock expects to meet inspiring experts who share our own vision: that MEMS and sensors are a critical part of a number of social and commercial revolutions, including the Internet of Things (IoT), the Sensor Web and the growth of the in vitro diagnostics (IVD) market. We are also interested in finding suppliers who can be part of our supply chain.

Serena is a marketing and communications manager at SEMI Europe.

A new wearable ultrasound patch that non-invasively monitors blood pressure in arteries deep beneath the skin could help people detect cardiovascular problems earlier on and with greater precision. In tests, the patch performed as well as some clinical methods to measure blood pressure.

Applications include real-time, continuous monitoring of blood pressure changes in patients with heart or lung disease, as well as patients who are critically ill or undergoing surgery. The patch uses ultrasound, so it could potentially be used to non-invasively track other vital signs and physiological signals from places deep inside the body.

A team of researchers led by the University of California San Diego describe their work in a paper published Sept. 11 in Nature Biomedical Engineering.

Wearable ultrasound patch tracks blood pressure in a deep artery or vein. Credit: Chonghe Wang/Nature Biomedical Engineering

“Wearable devices have so far been limited to sensing signals either on the surface of the skin or right beneath it. But this is like seeing just the tip of the iceberg,” said Sheng Xu, a professor of nanoengineering at the UC San Diego Jacobs School of Engineering and the corresponding author of the study. “By integrating ultrasound technology into wearables, we can start to capture a whole lot of other signals, biological events and activities going on way below the surface in a non-invasive manner.”

“We are adding a third dimension to the sensing range of wearable electronics,” said Xu, who is also affiliated with the Center for Wearable Sensors at UC San Diego.

The new ultrasound patch can continuously monitor central blood pressure in major arteries as deep as four centimeters (more than one inch) below the skin.

Physicians involved with the study say the technology would be useful in various inpatient procedures.

“This has the potential to be a great addition to cardiovascular medicine,” said Dr. Brady Huang, a co-author on the paper and radiologist at UC San Diego Health. “In the operating room, especially in complex cardiopulmonary procedures, accurate real-time assessment of central blood pressure is needed–this is where this device has the potential to supplant traditional methods.”

A convenient alternative to clinical methods

The device measures central blood pressure–which differs from the blood pressure that’s measured with an inflatable cuff strapped around the upper arm, known as peripheral blood pressure. Central blood pressure is the pressure in the central blood vessels, which send blood directly from the heart to other major organs throughout the body. Medical experts consider central blood pressure more accurate than peripheral blood pressure and also say it’s better at predicting heart disease.

Measuring central blood pressure isn’t typically done in routine exams, however. The state-of-the-art clinical method is invasive, involving a catheter inserted into a blood vessel in a patient’s arm, groin or neck and guiding it to the heart.

A non-invasive method exists, but it can’t consistently produce accurate readings. It involves holding a pen-like probe, called a tonometer, on the skin directly above a major blood vessel. To get a good reading, the tonometer must be held steady, at just the right angle and with the right amount of pressure each time. But this can vary between tests and different technicians.

“It’s highly operator-dependent. Even with the proper technique, if you move the tonometer tip just a millimeter off, the data get distorted. And if you push the tonometer down too hard, it’ll put too much pressure on the vessel, which also affects the data,” said co-first author Chonghe Wang, a nanoengineering graduate student at UC San Diego. Tonometers also require the patient to sit still–which makes continuous monitoring difficult–and are not sensitive enough to get good readings through fatty tissue.

The UC San Diego-led team has developed a convenient alternative–a soft, stretchy ultrasound patch that can be worn on the skin and provide accurate, precise readings of central blood pressure each time, even while the user is moving. And it can still get a good reading through fatty tissue.

The patch was tested on a male subject, who wore it on the forearm, wrist, neck and foot. Tests were performed both while the subject was stationary and during exercise. Recordings collected with the patch were more consistent and precise than recordings from a commercial tonometer. The patch recordings were also comparable to those collected with a traditional ultrasound probe.

Making ultrasound wearable

“A major advance of this work is it transforms ultrasound technology into a wearable platform. This is important because now we can start to do continuous, non-invasive monitoring of major blood vessels deep underneath the skin, not just in shallow tissues,” said Wang.

The patch is a thin sheet of silicone elastomer patterned with what’s called an “island-bridge” structure–an array of small electronic parts (islands) that are each connected by spring-shaped wires (bridges). Each island contains electrodes and devices called piezoelectric transducers, which produce ultrasound waves when electricity passes through them. The bridges connecting them are made of thin, spring-like copper wires. The island-bridge structure allows the entire patch to conform to the skin and stretch, bend and twist without compromising electronic function.

The patch uses ultrasound waves to continuously record the diameter of a pulsing blood vessel located as deep as four centimeters below the skin. This information is then translated into a waveform using customized software. Each peak, valley and notch in the waveform, as well as its overall shape, represents a specific activity or event in the heart. These signals provide a lot of detailed information to doctors assessing a patient’s cardiovascular health; they can be used, for example, to predict heart failure or to assess whether blood supply is adequate.
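
To make the diameter-to-waveform step concrete, here is a minimal sketch of one way such a translation can be done, using the exponential pressure-area relation common in the vascular-mechanics literature and calibrating against cuff-measured systolic and diastolic pressures. The function name and calibration scheme are illustrative assumptions, not the study’s published software.

```python
import numpy as np

def diameter_to_pressure(d_mm, p_sys=120.0, p_dia=80.0):
    """Translate a vessel-diameter waveform (mm) into a pressure waveform (mmHg).

    Illustrative sketch: applies the exponential pressure-area model,
    calibrated so the waveform's extremes match cuff-measured systolic
    and diastolic pressures (p_sys, p_dia).
    """
    area = np.pi * (np.asarray(d_mm) / 2.0) ** 2             # lumen cross-sectional area
    a_dia, a_sys = area.min(), area.max()                    # diastolic / systolic areas
    alpha = a_dia * np.log(p_sys / p_dia) / (a_sys - a_dia)  # vessel stiffness coefficient
    return p_dia * np.exp(alpha * (area / a_dia - 1.0))

# Example: a synthetic pulsating diameter trace around 2.6 mm
t = np.linspace(0, 1, 500)
pressure = diameter_to_pressure(2.6 + 0.1 * np.sin(2 * np.pi * t))
```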

Next steps

Researchers note that the patch still has a long way to go before it reaches the clinic. Improvements include integrating a power source, data processing units and wireless communication capability into the patch.

“Right now, these capabilities have to be delivered by wires from external devices. If we want to move this from benchtop to bedside, we need to put all these components on board,” said Xu.

The team is looking to collaborate with experts in data processing and wireless technologies for the next phase of the project.

Japan is at the heart of the semiconductor industry as the era of artificial intelligence (AI) dawns. SEMICON Japan 2018 will highlight AI and SMART technologies in Japan’s industry-leading event. Registration is now open for SEMICON Japan, Japan’s largest global electronics supply chain event, December 12-14 at Tokyo Big Sight in Tokyo.

Themed “Dreams Start Here,” SEMICON Japan 2018 reflects the promise of AI, Internet of Things (IoT) and other SMART technologies that are shaping the future. Japan is positioned to help power a semiconductor industry expansion that is enabling this new path ahead, supplying one-third of the world’s semiconductor equipment and half of its IC materials.

According to VLSI Research, seven of the world’s top 15 semiconductor equipment manufacturers in 2017 are headquartered in Japan. In the semiconductor materials market, Japanese companies dominate silicon wafers, photoresists, sputtering targets, bonding wires, lead frames, mold compounds and more. For SEMICON Japan visitors, the event is the ideal platform for connecting with Japan’s leading suppliers.

The SMART Application Zone at SEMICON Japan will once again connect SMART industries with the semiconductor supply chain to foster collaboration across the electronics ecosystem.

SEMICON Japan Keynotes

SEMICON Japan opening keynotes will feature two young leaders of Japan’s information and communications technology (ICT) industry sharing their vision for the industry:

Motoi Ishibashi, CTO of Rhizomatiks, will discuss the latest virtual and mixed reality technologies. Rhizomatiks, a Japanese media art company that staged the Rio Olympic Games closing ceremony, will orchestrate the opening performance at SEMICON Japan 2018. The company is dedicated to creating large-scale commercial projects combining technology with the arts.

Toru Nishikawa, president and CEO at Preferred Networks, will explore computer requirements for enabling deep learning applications. Preferred Networks, a deep-learning research startup, is conducting collaborative research with technology giants including Toyota Motors, Fanuc, NVIDIA, Intel and Microsoft.

Registration

For more information and to register for SEMICON Japan, visit www.semiconjapan.org/en/. Registration for the opening keynotes and other programs will open October 1.

By Michael Droeger

Are you ready for a shared economy where your transportation needs are no longer met by an automaker, but rather a “mobility service provider”? While smart transportation news has mostly focused on the likes of electrification (Tesla) and autonomy (Waymo), the real changes in transportation may be more fundamental than self-driving electric cars. According to presenters at this week’s Smart Automotive Summit at SEMICON Taiwan, new technologies won’t just make cars smarter: they will transform the way we see and use transportation in myriad ways.

Constance Chen, public relations general manager for forum sponsor Mercedes Benz, opened with a brief overview of parent Daimler’s evolving approach to transportation, dubbed CASE, which stands for Connected, Autonomous, Shared and Services, and Electric.

“The fundamental value of vehicles is changing,” Chen said, and car ownership is one of the biggest changes. Ride-sharing services like Uber and Lyft, and shared car services like ZipCar and DriveNow, are already addressing the transportation needs of a growing urban population that eschews car ownership. Traffic congestion, parking challenges, and a desire to improve air quality are key drivers (no pun intended) moving people away from car ownership to embrace shared transportation solutions.

Indeed, societal considerations are as challenging as some technological hurdles facing autonomous vehicle development. Robert Brown, Taiwan operations manager for Magma Electronics, listed his top five challenges for autonomous transportation:

  1. Perception (vision, sensors)
  2. Assessment (ability of systems to analyze data)
  3. Control (need for faster-than-human response)
  4. Communication (vehicle-to-vehicle, vehicle-to-everything)
  5. Expectations—specifically people’s expectations of the value autonomous transportation should deliver

As people change the way they view transportation and begin to understand what is possible when they can relinquish control of their vehicle, their transportation needs and expectations are likely to change. The challenges are, of course, also an opportunity to deliver a wide range of services, including information, entertainment, and retail, which opens the door for traditional carmakers like Mercedes Benz to position themselves more as service providers.

For those who have grown up with traditional car ownership and the perceived freedom of owning a car that lets one go anywhere at any time, the idea of giving up their car—one that they drive themselves—might seem beyond the pale. But as ride-sharing services are already showing, a growing portion of our population seems more than ready to embrace a shared and autonomous future.

The SEMICON Taiwan Smart Automotive Summit is part of SEMI’s Smart Transportation initiative focusing on automotive electronics, a top priority for SEMI and its 2,000+ members. SEMI’s industry standards, technology communities, roadmap efforts, EH&S/regulatory activities and other global platforms and communities bring together the automotive and semiconductor supply chains to collaborate, increase cross-industry efficiencies and shorten the time to better business results.

Michael Droeger is director of marketing at SEMI. 

Originally published on the SEMI blog.

By Serena Brischetto

SEMI spoke with Christian Mandl, Senior Director for Human Machine Interface (HMI), Infineon Technologies AG, ahead of the European MEMS & Sensors Summit. Mandl discussed how the sensing capabilities of machines are getting ever closer to the five human senses, allowing machines to comprehend the environment before acting.

SEMI: What’s it like to lead the Human Machine Interface (HMI) group at Infineon?

Mandl: Contextually aware smart devices capture our challenge very well. Devices need to be aware of their surroundings to better adapt their configurations to each specific user; in other words, to provide consumers with a more personalized experience. If machines understand the context around them better, their decision-making capabilities are improved, just like humans! Sensor fusion is the key enabler of contextual awareness. Thanks to sensor fusion, machines can provide more reliable feedback based on data from different sensors taken in the same situation, making the system more robust. Compared to traditional devices, false positives and false negatives are reduced, making the whole solution smarter.

The challenge we are addressing within the HMI group at Infineon is to enable systems that are aware of their surroundings by combining our best-in-class sensors with sophisticated machine learning algorithms. We create solutions that can better sense the environment around the device, to then trigger user-specific reactions. This is what we call intuitive sensing.

SEMI: Will you elaborate on this challenge? What are the greatest difficulties in combining existing technologies and devices with sensors?

Mandl: The traditional approach to adding sensors to technology has been very simplistic. For example, radar sensors for presence detection typically provide the distance to the closest object and trigger a specific action. This approach works but is limited in the number of use cases it can address, since it is not customizable. By combining sensor fusion with sophisticated machine learning techniques, solutions become more robust and stable. When equipping smart speakers with our microphones and radar sensors, they can detect a user’s presence and track location and motion. Adding advanced algorithms such as beam-forming allows the audio reception beam to be steered towards the user and noise to be filtered out for a clearer understanding of commands.
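
As a toy illustration of why fusing modalities cuts false positives, consider a presence detector that only triggers when a radar range hit and microphone activity agree. The thresholds and data layout are illustrative assumptions, not Infineon’s implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Frame:
    radar_range_m: Optional[float]  # nearest detected object, None if no return
    mic_rms: float                  # microphone RMS energy over the same interval

def user_present(frame: Frame,
                 max_range_m: float = 3.0,
                 mic_rms_min: float = 0.02) -> bool:
    """Fused decision: declare presence only when both sensors agree,
    suppressing false positives either modality alone would produce
    (e.g. a curtain moving in front of the radar, or a TV in another room)."""
    radar_hit = frame.radar_range_m is not None and frame.radar_range_m < max_range_m
    audio_hit = frame.mic_rms > mic_rms_min
    return radar_hit and audio_hit
```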

The market is demanding more of these innovative ready-to-use solutions. Delivering these solutions requires a thorough evaluation based on very strong knowledge of the sensing elements and the raw data they provide. Infineon has a leading edge here, with more than 40 years’ experience in sensing solutions and a deep-rooted system understanding, to create the ready-to-use sensor solutions demanded by the market.

SEMI: You mentioned that data is key to technological development. Re-innovating our world depends on the quality of valuable and secured data about the environment, and what is done with it. How do you make this possible?

Mandl: Indeed, collecting valuable and trustworthy information is critical for any application, as mislabeled or incorrect data reduces the accuracy of any solution. Using reliable and secured sensors is the first critical step towards high-quality data. This is where Infineon’s XENSIV™ sensor portfolio plays a crucial role. Our sensors are exceptionally reliable thanks to our industry-leading technologies, and they are the perfect fit for various applications in automotive, industrial and consumer markets. With clean-labeled data in hand and a good understanding of each use case, we can drastically improve the probability of detecting an event.

SEMI: Can you further explain the sensor fusion concepts that you are working on to connect the real world with the digital world by sensors?

Mandl: A good example is the integration of radar sensors into smart speakers, which tremendously improves the capability of current devices to understand the real world and enables numerous use cases that were not possible before.

Starting with keyword-less interactions with technology, the next generation of IoT devices with the capability to locate and track users will be able to adjust intelligent actions to your position. For example, when we ask our smart speaker in the living room to “turn on the lights” or “play music,” only the lights and speakers in the user’s surroundings should be activated, and not the ones in the kitchen. When the user walks into another room, the music and light should follow the user’s position and shift flawlessly into the new room. Precise presence detection and tracking by radar will enable optimal interaction with consumers for a clearer understanding of commands and a flawless user experience. It should also create power savings for the smart home by switching off lights and other devices when no one is around.
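
A sketch of the beam-steering idea: given the user’s angle from radar tracking, a classic delay-and-sum beamformer aligns and sums the microphone channels toward that direction. This is textbook beamforming under stated assumptions (linear array, far-field source), not a description of any product’s algorithm:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def delay_and_sum(mic_x, signals, fs, angle_rad):
    """Steer a linear microphone array toward angle_rad (from broadside),
    e.g. the user's direction reported by radar tracking.

    mic_x:   (M,) microphone x-positions in metres
    signals: (M, N) time-domain samples, one row per microphone
    fs:      sample rate in Hz
    """
    delays = np.asarray(mic_x) * np.sin(angle_rad) / SPEED_OF_SOUND  # arrival offsets (s)
    shifts = np.round(delays * fs).astype(int)
    shifts -= shifts.min()                     # normalize so every shift is >= 0
    out = np.zeros(signals.shape[1])
    for channel, s in zip(signals, shifts):
        out[: out.size - s] += channel[s:]     # align each channel, then sum
    return out / len(signals)                  # coherent signal adds, noise averages out
```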

SEMI: Machines’ sensing capabilities are getting closer to the five human senses as they understand the environment before acting. What will the new wave of applications include with regard to consumer markets?

Mandl: The potential of sensor fusion to enhance the sensing capabilities of machines can hardly be imagined yet. There are innumerable use cases that can be enabled with the right combination of sensors, data-processing algorithms and machine learning tools. Smart devices will be more aware of the situation and will anticipate user commands, leading to the era of intuitive sensing. Imagine a world where you can communicate with your smart device like you talk to another human being!

Thanks to the advanced intelligence that we bring with our HMI group, devices will have a sensor brain that matches fused data from multiple sensors to customer needs for each use case. Not only the smart speaker market will experience this transformation, but also other IoT devices in areas such as home security or user authentication, and wearables for optimized wellbeing tracking and monitoring. Devices will be capable of achieving more if provided with the right technology combination. Sensor fusion will enable technology to make better and smarter decisions in complex situations, in some cases even better than humans would.

SEMI: What do you expect from European MEMS & Sensors Summit 2018 and why do you recommend attending in Grenoble?

Mandl: This event is a great opportunity not only to stay informed and see what is happening in the MEMS and sensors industry, but also to meet current and new partners and customers. Attending is important to observe how industry leaders are working towards the latest market trends, and discuss what else can be done to make life easier, safer and greener for everyone.

Serena Brischetto is a marketing and communications manager at SEMI Europe.

Originally published on the SEMI blog.

By Dr. Eric Mounier

2017 was a good year for the MEMS and sensors business, and that upward trend should continue. We forecast extended strong growth for the sensors and actuators market, reaching more than $100 billion in 2023 for a total of 185 billion units. Optical sensors, especially CMOS image sensors, will have the lion’s share with almost 40 percent of market value. MEMS will also play an important role in that growth: During 2018–2023, the MEMS market will experience 17.5 percent growth in value and 26.7 percent growth in units, with the consumer market accounting for more than 50 percent(1) share overall.

Evolution of sensors

Sensors were first developed and used for physical sensing: shock, pressure, then acceleration and rotation. Greater investment in R&D spurred MEMS’ expansion from physical sensing to light management (e.g., micromirrors) and then to uncooled infrared sensing (e.g., microbolometers). From sensing light to sensing sound, MEMS microphones formed the next wave of MEMS development. MEMS and sensors are entering a new and exciting phase of evolution as they transcend human perception, progressing toward ultrasonic, infrared and hyperspectral sensing.

Sensors can help us to compensate when our physical or emotional sensing is limited in some way. Higher-performance MEMS microphones are already helping the hearing-impaired. Researchers at Arizona State University are among those developing cochlear implants — featuring piezoelectric MEMS sensors — which may one day restore hearing to those with significant hearing loss.

The visually impaired may take heart in knowing that researchers at Stanford University are collaborating on silicon retinal implants. Pixium Vision began clinical trials in humans in 2017 with its silicon retinal implants.

It’s not science fiction to think that we will use future generations of sensors for emotion/empathy sensing. Augmenting our reality, such sensing could have many uses, perhaps even aiding the ability of people on the autism spectrum to more easily interpret the emotions of others.

Through my years in the MEMS industry, I have identified three distinct eras in MEMS’ evolution:

  1. The “detection era” in the very first years, when we used simple sensors to detect a shock.
  2. The “measuring era” when sensors could not only sense and detect but also measure (e.g., a rotation).
  3. The “global-perception awareness era” when we increasingly use sensors to map the environment. We conduct 3D imaging with Lidar for autonomous vehicles. We monitor air quality using environmental sensors. We recognize gestures using accelerometers and/or ultrasonics. We implement biometry with fingerprint and facial recognition sensors. This is possible thanks to sensor fusion of multiple parameters, together with artificial intelligence.

Numerous technological breakthroughs are responsible for this steady stream of advancements: new sensor design, new processes and materials, new integration approaches, new packaging, sensor fusion, and new detection principles.

Global Awareness Sensing

The era of global awareness sensing is upon us. We can either view global awareness as an extension of human sensing capabilities (e.g., adding infrared imaging to visible) or as beyond-human sensing capabilities (e.g., machines with superior environmental perception, such as Lidar in a robotic vehicle). Think about Professor X in Marvel’s universe, and you can imagine how human perception could evolve in the future!

Some companies envisioned global awareness from the start. Movea (now part of TDK InvenSense), for example, began their development with inertial MEMS. Others implemented global awareness by combining optical sensors such as Lidar and night-vision sensors for robotic cars. A third contingent grouped environmental sensors (gas, particle, pressure, temperature) to check air quality. The newest entrant in this group, the particle sensor, could play an especially important role in air-quality sensing, particularly in wearable devices.

Driven by mounting evidence of global air-quality deterioration, air pollution has become a major societal concern. Studies show that there is no safe level of particulates: for every increase in the concentration of PM10 or PM2.5 inhalable particles in the air, the lung cancer rate rises proportionately. Combining a particle sensor with a mapping application in a wearable could allow us to identify the most polluted urban zones.
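
A hypothetical sketch of that wearable-plus-mapping idea: tag each PM2.5 reading with a GPS fix, bucket readings into grid cells, and average per cell to surface the most polluted zones. All names and the grid resolution are illustrative assumptions:

```python
from collections import defaultdict

def grid_cell(lat: float, lon: float, step: float = 0.01) -> tuple:
    """Quantize a GPS fix onto a ~1 km grid (0.01 degrees, illustrative)."""
    return (round(lat / step), round(lon / step))

def pollution_map(samples):
    """samples: iterable of (lat, lon, pm25_ug_m3) tuples from the wearable."""
    sums = defaultdict(lambda: [0.0, 0])
    for lat, lon, pm25 in samples:
        cell = sums[grid_cell(lat, lon)]
        cell[0] += pm25
        cell[1] += 1
    return {c: s / n for c, (s, n) in sums.items()}  # mean PM2.5 per grid cell
```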

The Need for Artificial Intelligence

To realize global awareness, we also need artificial intelligence (AI), but first, we have challenges to solve. Activity tracking, for example, requires accurate live classification of sensor data. Relegating all AI processing to a main processor, however, would consume significant CPU resources, reducing available processing power. Likewise, storing all AI data on the device would push up storage costs. To marry AI with MEMS, we must do the following (a brief sketch after the list illustrates the split):

  1. Decouple feature processing from the classification engine, offloading classification to a more powerful external processor.
  2. Reduce storage and processing demands by deploying only the features required for accurate activity recognition.
  3. Install low-power MEMS sensors that can incorporate data from multiple sensors (sensor fusion) and enable pre-processing for always-on execution.
  4. Retrain the model with system-supported data that can accurately identify the user’s activities.
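
A minimal sketch of steps 1-3: a compact feature extractor that could run always-on at the sensor hub, with classification decoupled onto the application processor. The feature set and threshold model are illustrative stand-ins, not a specific product’s pipeline:

```python
import numpy as np

def extract_features(accel_window: np.ndarray) -> np.ndarray:
    """Runs on the low-power sensor hub: reduce an (N, 3) accelerometer
    window to a handful of floats, so only features (not raw samples)
    leave the hub -- cutting storage and transfer costs."""
    mag = np.linalg.norm(accel_window, axis=1)       # fused magnitude signal
    return np.array([mag.mean(), mag.std(), mag.max() - mag.min()])

def classify(features: np.ndarray) -> str:
    """Runs on the more powerful application processor: a stand-in
    threshold model where a retrained classifier (step 4) would deploy."""
    return "walking" if features[1] > 0.5 else "stationary"

window = np.random.randn(100, 3) * 0.3               # synthetic 3-axis window
print(classify(extract_features(window)))
```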

There are two ways to add AI and software in mobile and automotive applications. The first is a centralized approach, where sensor data is processed in the application processing unit (APU) that contains the software. The second is a decentralized approach, where the sensor chip is localized in the same package, close to the software and the AI (in the DSP for a CMOS image sensor, for example). Whatever the approach, MEMS and sensors manufacturers need to understand AI, although they are unlikely to gain much value at the sensor-chip level.

Heading to an Augmented World

We have achieved massive progress in sensor development over the years and are now reaching the point when sensors can mimic or augment most of our perception: vision, hearing, touch, smell and even emotion/empathy as well as some aesthetic senses. We should realize that humans are not the only ones to benefit from these developments. Enhanced perception will also allow robots to help us in our daily lives (through smart transportation, better medical care, contextually aware environments and more). We need to couple smart sensors’ development with AI to further enhance our experiences with the people, places and things in our lives.

About the author

With almost 20 years’ experience in MEMS, sensors and photonics applications, markets, and technology analyses, Dr. Eric Mounier provides in-depth industry insight into current and future trends. As a Principal Analyst, Technology & Markets, MEMS & Photonics, in the Photonics, Sensing & Display Division, he contributes daily to the development of MEMS and photonics activities at Yole Développement (Yole). He is involved with a large collection of market and technology reports, as well as multiple custom consulting projects: business strategy, identification of investment or acquisition targets, due diligence (buy/sell side), market and technology analyses, cost modeling, and technology scouting.

Previously, Mounier held R&D and marketing positions at CEA Leti (France). He has spoken in numerous international conferences and has authored or co-authored more than 100 papers. Mounier has a Semiconductor Engineering Degree and a PhD in Optoelectronics from the National Polytechnic Institute of Grenoble (France).

Mounier is a featured speaker at SEMI-MSIG European MEMS & Sensors Summit, September 20, 2018 in Grenoble, France.

Originally published on the SEMI blog.

Products built with microelectromechanical systems (MEMS) technology are forecast to account for 73% of the $9.3 billion semiconductor sensor market in 2018 and about 47% of the projected 24.1 billion total sensor units to be shipped globally this year, according to IC Insights’ 2018 O-S-D Report—A Market Analysis and Forecast for Optoelectronics, Sensors/Actuators, and Discretes.  Revenues for MEMS-built sensors—including accelerometers, gyroscope devices, pressure sensors, and microphone chips—are expected to grow 10% in 2018 to $6.8 billion compared to nearly $6.1 billion in 2017, which was a 17% increase from $5.2 billion in 2016, the O-S-D Report says.  Shipments of MEMS-built sensors are forecast to rise about 11% in 2018 to 11.1 billion after growing 19% in 2016.

An additional $5.9 billion in sales is expected to be generated in 2018 by MEMS-built actuators, which use their microelectromechanical systems transducers to translate and initiate action—such as dispensing ink in printers or drugs in hospital patients, reflecting light on tilting micromirrors in digital projectors, or filtering radio-frequency signals by converting RF to acoustic waves across structures on chips.  Total sales of MEMS-built sensors and actuators are projected to grow 10% in 2018 to $12.7 billion after increasing nearly 18% in 2017 and 15% in 2016 (Figure 1).

Figure 1

In terms of unit volume, shipments of MEMS-built sensors and actuators are expected to grow by slightly less than 12% to 13.1 billion units worldwide after climbing 20% in 2017 and rising 11% in 2016.  Total revenues for MEMS-based sensors and actuators are projected to increase by a compound annual growth rate (CAGR) of 9.2% between 2017 and 2022 to reach $17.8 billion in the final year of the forecast, according to the 2018 O-S-D Report.  Worldwide shipments of these MEMS-built semiconductors are expected to grow by a CAGR of 11.4% in the 2017-2022 period to 20.2 billion units at the end of the forecast.

One of the biggest changes expected in the five-year forecast period will be greater stability in the average selling price for MEMS-built devices and significantly less ASP erosion than in the past 10 years. The ASP for MEMS-built sensors and actuators is projected to drop by a CAGR of -2.0% between 2017 and 2022 compared to a -4.7% annual rate of decline in the 2012-2017 period and the steep CAGR plunge of -13.6% between 2007 and 2012.  The ASP for MEMS-built devices is expected to be $0.88 in 2022 versus $0.97 in 2017, $1.24 in 2012, and $2.57 in 2007, says the 2018 report.
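
As a quick sanity check, the quoted per-year declines follow directly from those endpoint ASPs; a few lines of arithmetic reproduce them:

```python
# Verify the ASP CAGRs implied by the endpoint prices quoted above.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over `years` years."""
    return (end / start) ** (1 / years) - 1

print(f"2017-2022: {cagr(0.97, 0.88, 5):+.1%}")  # about -1.9%, quoted as -2.0%
print(f"2012-2017: {cagr(1.24, 0.97, 5):+.1%}")  # about -4.8%, quoted as -4.7%
print(f"2007-2012: {cagr(2.57, 1.24, 5):+.1%}")  # about -13.6%, matching the quoted plunge
```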

The spread of MEMS-based sensors and actuators into a broader range of new “autonomous” and “intelligent” automated applications—such as those connected to the Internet of Things (IoT) and containing artificial intelligence (AI)—will help keep ASPs from falling as much as they did in the last 10 years.  IC Insights believes many MEMS-based semiconductors are becoming more specialized for certain applications, which will help insulate them from pricing pressures in the market.