Category Archives: Manufacturing

With the MEMS and sensors industry on the cusp of explosive growth, MITRE Corp. cyber security expert Cynthia Wright will urge industry executives to lay the groundwork for securing hundreds of billions of autonomous mobility devices in her keynote at the 14th annual MEMS & Sensors Executive Congress (October 29-30, 2018 in Napa Valley, Calif.). Wright, a retired military officer with over 25 years of experience in national security and cyber strategy and policy, will highlight the critical importance of device security and privacy in ensuring reliability and end-user safety.

Hosted by MEMS & Sensors Industry Group (MSIG), a SEMI technology community, the event also features DARPA’s Ron Polcawich, who will introduce his agency’s innovation and production program, a government-industry collaboration that aims to dramatically speed design-to-development of MEMS.

Spurred by surging growth in autonomous mobility devices such as smartphones, smart speakers, autonomous cars, and fitness and healthcare wearables, the global market for MEMS and sensors is expected to double in the next five years, reaching $100B by 2023.[1] Featured speakers at MEMS & Sensors Executive Congress will examine the enabling role of MEMS and sensors in these diverse intelligent applications.

  • Autonomous and Electric Cars: What’s in for Conventional MEMS & Sensors? – Jérémie Bouchaud, IHS Markit
  • Status, Challenges and Opportunities of the 2018 MEMS & Sensors Industry – Guillaume Girardin, Yole Développement
  • Smart Ear: Will Innovation Lead to Technology with Human-like Audio Capabilities? – Andreas Kopetz, Infineon Technologies AG
  • Sensors in Food and Agriculture – David Mount, ULVAC
  • Environmental Sensor Systems Enabling Autonomous Mobility – Marcellino Gemelli, Bosch Sensortec
  • It’s Time for Wearables to Revolutionize Healthcare – Craig Easson and Sudir Mulpuru, Maxim Integrated

Special Events

  • Technology Showcase – Finalists will compete for audience votes as they demo their MEMS/sensors-enabled mobility products.
  • Alquimista Cellars Wine Tasting and Dinner on Monday, October 29

MSEC will take place October 29-30, 2018, at the Silverado Resort and Spa in Napa Valley, Calif.

By Serena Brischetto

SEMI’s Serena Brischetto caught up with Zimmer and Peacock Director Martin Peacock to discuss sensor opportunities and challenges ahead of the European MEMS & Sensors and Imaging & Sensors Summits.

SEMI: Sensors enable a myriad of applications, from measuring caffeine in coffee, the hotness of chillies and ions in the blood of patients, to detecting sulfite levels in wine. But what, in your opinion, is the hottest application today?

Peacock: The hot topic now is point-of-care testing for medical diagnostics and wearable biosensors, including continuous glucose monitoring (CGM) sensors for Type 1 diabetics. At the moment, there are three CGM market leaders: Dexcom, Abbott and Medtronic. But in addition, several companies are currently developing CGM technologies.

SEMI: What are engineers working on to improve sensors’ efficiency?

Peacock: Though many groups are working on increasing sensor sensitivity, the big issues are manufacturing and the repeatability of manufacturing. Our engineers are currently working on making our manufacturing repeatable.

The issue with biosensors and medical diagnostics is that sensor volumes are much lower than the manufacturing volumes traditionally seen in the semiconductor industry. This is simply because the human health market is very fragmented, so, outside of diabetes, it is hard to identify a biosensor or medical diagnostic required at volumes the semiconductor industry would consider high.

SEMI: And what are the main challenges?

Peacock: Making biosensors at high volume, with a tight tolerance and at a low cost. As discussed above, the issue with biosensors is that they are not necessarily required at high volumes, so a manufacturer is trying to produce high-quality products at relatively low manufacturing volumes, all while trying to do this at a price point the market can bear. To summarise the main challenge in biosensors, one would say 'this is a very fragmented market.'

SEMI: What techniques are currently being deployed by Zimmer and Peacock to overcome those challenges?

Peacock: Zimmer and Peacock has a proprietary database system for organizing our development and manufacturing data so we can track manufacturing quality and determine how we are performing. We are dealing with the fragmented market by taking a platform approach, ensuring that all our clients share the same supply chain up to the point where we functionalise the biosensors with their specific biochemistry. This means our clients get economies of scale even though they require their products in relatively small volumes.

SEMI: What do you expect from SEMI European MEMS & Sensors Summit 2018 and why do you recommend attending in Grenoble?

Peacock: Zimmer and Peacock expects to meet inspiring experts who share our vision: that MEMS and sensors are a critical part of a number of social and commercial revolutions, including the Internet of Things (IoT), the Sensor Web and the growth of the in vitro diagnostics (IVD) market. We are also interested in finding suppliers who can be part of our supply chain.

Serena is a marketing and communications manager at SEMI Europe.

Japan is at the heart of the semiconductor industry as the era of artificial intelligence (AI) dawns. SEMICON Japan 2018, Japan's largest global electronics supply chain event, will highlight AI and SMART technologies. Registration is now open for the event, December 12-14 at Tokyo Big Sight in Tokyo.

Themed “Dreams Start Here,” SEMICON Japan 2018 reflects the promise of AI, the Internet of Things (IoT) and other SMART technologies that are shaping the future. Japan is positioned to help power a semiconductor industry expansion enabling this new path ahead, supplying one-third of the world’s semiconductor equipment and half of its IC materials.

According to VLSI Research, seven of the world’s top 15 semiconductor equipment manufacturers in 2017 are headquartered in Japan. In the semiconductor materials market, Japanese companies dominate silicon wafers, photoresists, sputtering targets, bonding wires, lead frames, mold compounds and more. For SEMICON Japan visitors, the event is the ideal platform for connecting with Japan’s leading suppliers.

The SMART Application Zone at SEMICON Japan will once again connect SMART industries with the semiconductor supply chain to foster collaboration across the electronics ecosystem.

SEMICON Japan Keynotes

SEMICON Japan opening keynotes will feature two young leaders of Japan’s information and communications technology (ICT) industry sharing their vision for the industry:

Motoi Ishibashi, CTO of Rhizomatiks, will discuss the latest virtual and mixed reality technologies. Rhizomatiks, a Japanese media art company that staged the Rio Olympic Games closing ceremony, will orchestrate the opening performance at SEMICON Japan 2018. The company is dedicated to creating large-scale commercial projects combining technology with the arts.

Toru Nishikawa, president and CEO at Preferred Networks, will explore computer requirements for enabling deep learning applications. Preferred Networks, a deep-learning research startup, is conducting collaborative research with technology giants including Toyota Motors, Fanuc, NVIDIA, Intel and Microsoft.

Registration

For more information and to register for SEMICON Japan, visit www.semiconjapan.org/en/. Registration for the opening keynotes and other programs will open October 1.

By Michael Droeger

Are you ready for a shared economy where your transportation needs are no longer met by an automaker, but rather by a “mobility service provider”? While smart transportation news has mostly focused on the likes of electrification (Tesla) and autonomy (Waymo), the real changes in transportation may be more fundamental than self-driving electric cars. According to presenters at this week’s Smart Automotive Summit at SEMICON Taiwan, new technologies won’t just make cars smarter: they will transform the way we see and use transportation in myriad ways.

Constance Chen, public relations general manager for forum sponsor Mercedes Benz, opened with a brief overview of parent Daimler’s evolving approach to transportation, dubbed CASE, which stands for Connected, Autonomous, Shared and Services, and Electric.

“The fundamental value of vehicles is changing,” Chen said, and car ownership is one of the biggest changes. Ride-sharing services like Uber and Lyft, and shared car services like ZipCar and DriveNow, are already addressing the transportation needs of a growing urban population that eschews car ownership. Traffic congestion, parking challenges, and a desire to improve air quality are key drivers (no pun intended) moving people away from car ownership to embrace shared transportation solutions.

Indeed, societal considerations are as challenging as some technological hurdles facing autonomous vehicle development. Robert Brown, Taiwan operations manager for Magma Electronics, listed his top five challenges for autonomous transportation:

  1. Perception (vision, sensors)
  2. Assessment (ability of systems to analyze data)
  3. Control (need for faster-than-human response)
  4. Communication (vehicle-to-vehicle, vehicle-to-everything)
  5. Expectations—specifically people’s expectations of the value autonomous transportation should deliver

As people change the way they view transportation and begin to understand what is possible when they can relinquish control of their vehicle, their transportation needs and expectations are likely to change. The challenges are, of course, also an opportunity to deliver a wide range of services, including information, entertainment, and retail, which opens the door for traditional carmakers like Mercedes Benz to position themselves more as service providers.

For those who have grown up with traditional car ownership and the perceived freedom of being able to go anywhere at any time in a car they drive themselves, the idea of giving up that car might seem beyond the pale. But as ride-sharing services are already showing, a growing portion of our population seems more than ready to embrace a shared and autonomous future.

The SEMICON Taiwan Smart Automotive Summit is part of SEMI’s Smart Transportation initiative focusing on automotive electronics, a top priority for SEMI and its 2,000+ members. SEMI’s industry standards, technology communities, roadmap efforts, EH&S/regulatory activities and other global platforms and communities bring together the automotive and semiconductor supply chains to collaborate, increase cross-industry efficiencies and shorten the time to better business results.

Michael Droeger is director of marketing at SEMI. 

Originally published on the SEMI blog.

By Dr. Eric Mounier

2017 was a good year for the MEMS and sensors business, and that upward trend should continue. We forecast extended strong growth for the sensors and actuators market, reaching more than $100 billion in 2023 for a total of 185 billion units. Optical sensors, especially CMOS image sensors, will have the lion’s share with almost 40 percent of market value. MEMS will also play an important role in that growth: During 2018–2023, the MEMS market will experience 17.5 percent growth in value and 26.7 percent growth in units, with the consumer market accounting for more than 50 percent(1) share overall.

Evolution of sensors

Sensors were first developed and used for physical sensing: shock, pressure, then acceleration and rotation. Greater investment in R&D spurred MEMS’ expansion from physical sensing to light management (e.g., micromirrors) and then to uncooled infrared sensing (e.g., microbolometers). From sensing light to sensing sound, MEMS microphones formed the next wave of MEMS development. MEMS and sensors are entering a new and exciting phase of evolution as they transcend human perception, progressing toward ultrasonic, infrared and hyperspectral sensing.

Sensors can help us to compensate when our physical or emotional sensing is limited in some way. Higher-performance MEMS microphones are already helping the hearing-impaired. Researchers at Arizona State University are among those developing cochlear implants — featuring piezoelectric MEMS sensors — which may one day restore hearing to those with significant hearing loss.

The visually impaired may take heart in knowing that researchers at Stanford University are collaborating on silicon retinal implants. Pixium Vision began clinical trials in humans in 2017 with its silicon retinal implants.

It’s not science fiction to think that we will use future generations of sensors for emotion/empathy sensing. Augmenting our reality, such sensing could have many uses, perhaps even aiding the ability of people on the autism spectrum to more easily interpret the emotions of others.

Through my years in the MEMS industry, I have identified three distinct eras in MEMS’ evolution:

  1. The “detection era” in the very first years, when we used simple sensors to detect a shock.
  2. The “measuring era” when sensors could not only sense and detect but also measure (e.g., a rotation).
  3. The “global-perception awareness era” when we increasingly use sensors to map the environment. We conduct 3D imaging with Lidar for autonomous vehicles. We monitor air quality using environmental sensors. We recognize gestures using accelerometers and/or ultrasonics. We implement biometry with fingerprint and facial recognition sensors. This is possible thanks to sensor fusion of multiple parameters, together with artificial intelligence.
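The sensor fusion mentioned above can be illustrated with a classic textbook example: a complementary filter that blends a gyroscope's smooth but drifting rate signal with an accelerometer's noisy but drift-free tilt estimate. This is a minimal sketch with invented readings, not tied to any particular device or the specific fusion methods the article describes:

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse a gyroscope rate (smooth but drifting) with an accelerometer
    tilt estimate (noisy but drift-free) into one angle estimate."""
    return alpha * (angle_prev + gyro_rate * dt) + (1 - alpha) * accel_angle

# Invented readings: device actually tilted ~10 degrees, gyro reads 0.5 deg/s
angle = 0.0
for _ in range(100):                  # 100 samples at 100 Hz (one second)
    accel_angle = 10.0                # tilt derived from the gravity vector
    angle = complementary_filter(angle, gyro_rate=0.5,
                                 accel_angle=accel_angle, dt=0.01)

print(round(angle, 1))                # converges toward the true tilt, ~8.9 here
```

The high `alpha` trusts the gyroscope over short intervals while the accelerometer term slowly corrects long-term drift, which is why the estimate converges toward the true tilt instead of drifting away.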

Numerous technological breakthroughs are responsible for this steady stream of advancements: new sensor design, new processes and materials, new integration approaches, new packaging, sensor fusion, and new detection principles.

Global Awareness Sensing

The era of global awareness sensing is upon us. We can either view global awareness as an extension of human sensing capabilities (e.g., adding infrared imaging to visible) or as beyond-human sensing capabilities (e.g., machines with superior environmental perception, such as Lidar in a robotic vehicle). Think about Professor X in Marvel’s universe, and you can imagine how human perception could evolve in the future!

Some companies envisioned global awareness from the start. Movea (now part of TDK InvenSense), for example, began their development with inertial MEMS. Others implemented global awareness by combining optical sensors such as Lidar and night-vision sensors for robotic cars. A third contingent grouped environmental sensors (gas, particle, pressure, temperature) to check air quality. The newest entrant in this group, the particle sensor, could play an especially important role in air-quality sensing, particularly in wearable devices.

Air pollution has become a major topic in our society, driven by increasing concern over mounting evidence of global air-quality deterioration. Studies show that there is no safe level of particulates: for every increase in the concentration of PM10 or PM2.5 inhalable particles in the air, the lung cancer rate rises proportionately. Combining a particle sensor with a mapping application in a wearable could allow us to identify the most polluted urban zones.
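The particle-sensor-plus-mapping idea can be sketched very simply: tag each PM2.5 reading with a location, then average per zone to flag the most polluted one. The coordinates, readings, and zone granularity below are all invented for illustration; a real wearable would use its positioning API and calibrated sensor output:

```python
from collections import defaultdict

# Hypothetical (lat, lon, PM2.5 µg/m³) readings logged by a wearable
readings = [
    (48.86, 2.35, 42.0), (48.86, 2.35, 55.0),   # city centre
    (48.90, 2.30, 12.0), (48.90, 2.30, 15.0),   # park
]

# Group readings by location "zone" (here, the raw coordinate pair)
zones = defaultdict(list)
for lat, lon, pm25 in readings:
    zones[(lat, lon)].append(pm25)

# Average PM2.5 per zone; the highest average flags the most polluted area
averages = {zone: sum(v) / len(v) for zone, v in zones.items()}
worst = max(averages, key=averages.get)
print(worst, averages[worst])   # (48.86, 2.35) 48.5
```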

The Need for Artificial Intelligence

To realize global awareness, we also need artificial intelligence (AI), but first, we have challenges to solve. Activity tracking, for example, requires accurate live classification of sensor data. Relegating all AI processing to a main processor, however, would consume significant CPU resources, reducing available processing power. Likewise, storing all the raw data on the device would push up storage costs. To marry AI with MEMS, we must do the following:

  1. Decouple feature processing from the classification engine, offloading classification to a more powerful external processor.
  2. Reduce storage and processing demands by deploying only the features required for accurate activity recognition.
  3. Install low-power MEMS sensors that can incorporate data from multiple sensors (sensor fusion) and enable pre-processing for always-on execution.
  4. Retrain the model with system-supported data that can accurately identify the user’s activities.
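The split described in these steps can be sketched in a few lines: lightweight feature extraction stays near the sensor, and only a compact feature summary is handed to a classifier elsewhere. The feature set, thresholds, and activity labels below are invented for illustration; a real system would use a trained model rather than hand-written rules:

```python
import statistics

def extract_features(window):
    """Lightweight, always-on feature extraction that could run near the
    sensor: summarize a window of accelerometer magnitudes (steps 1 and 3)."""
    return {
        "mean": statistics.fmean(window),
        "stdev": statistics.pstdev(window),
        "peak": max(window),
    }

def classify(features):
    """Stand-in for the classification engine the text suggests offloading
    to a more powerful external processor (step 1). Thresholds are invented
    placeholders for a model retrained on real user data (step 4)."""
    if features["stdev"] < 0.05:
        return "stationary"
    return "walking" if features["peak"] < 2.5 else "running"

window = [1.0, 1.2, 0.9, 1.1, 1.3, 1.0]   # hypothetical accel magnitudes (g)
features = extract_features(window)        # only features leave the sensor (step 2)
print(classify(features))                  # walking
```

Shipping three numbers per window instead of the raw sample stream is what reduces both the storage and the processing burden on the main processor.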

There are two ways to add AI and software in mobile and automotive applications. The first is a centralized approach, where sensor data is processed in the application processor unit (APU) that contains the software. The second is a decentralized approach, where the sensor chip is localized in the same package, close to the software and the AI (in the DSP for a CMOS image sensor, for example). Whatever the approach, MEMS and sensors manufacturers need to understand AI, although they are unlikely to gain much value at the sensor-chip level.

Heading to an Augmented World

We have achieved massive progress in sensor development over the years and are now reaching the point when sensors can mimic or augment most of our perception: vision, hearing, touch, smell and even emotion/empathy as well as some aesthetic senses. We should realize that humans are not the only ones to benefit from these developments. Enhanced perception will also allow robots to help us in our daily lives (through smart transportation, better medical care, contextually aware environments and more). We need to couple smart sensors’ development with AI to further enhance our experiences with the people, places and things in our lives.

About the author

With almost 20 years’ experience in MEMS, sensors and photonics applications, markets, and technology analyses, Dr. Eric Mounier provides in-depth industry insight into current and future trends. As a Principal Analyst, Technology & Markets, MEMS & Photonics, in the Photonics, Sensing & Display Division, he contributes daily to the development of MEMS and photonics activities at Yole Développement (Yole). He is involved with a large collection of market and technology reports, as well as multiple custom consulting projects: business strategy, identification of investment or acquisition targets, due diligence (buy/sell side), market and technology analyses, cost modeling, and technology scouting, etc.

Previously, Mounier held R&D and marketing positions at CEA Leti (France). He has spoken in numerous international conferences and has authored or co-authored more than 100 papers. Mounier has a Semiconductor Engineering Degree and a PhD in Optoelectronics from the National Polytechnic Institute of Grenoble (France).

Mounier is a featured speaker at SEMI-MSIG European MEMS & Sensors Summit, September 20, 2018 in Grenoble, France.

Originally published on the SEMI blog.

Products built with microelectromechanical systems (MEMS) technology are forecast to account for 73% of the $9.3 billion semiconductor sensor market in 2018 and about 47% of the projected 24.1 billion total sensor units to be shipped globally this year, according to IC Insights’ 2018 O-S-D Report—A Market Analysis and Forecast for Optoelectronics, Sensors/Actuators, and Discretes.  Revenues for MEMS-built sensors—including accelerometers, gyroscope devices, pressure sensors, and microphone chips—are expected to grow 10% in 2018 to $6.8 billion compared to nearly $6.1 billion in 2017, which was a 17% increase from $5.2 billion in 2016, the O-S-D Report says.  Shipments of MEMS-built sensors are forecast to rise about 11% in 2018 to 11.1 billion after growing 19% in 2016.

An additional $5.9 billion in sales is expected to be generated in 2018 by MEMS-built actuators, which use their microelectromechanical systems transducers to translate and initiate action—such as dispensing ink in printers or drugs in hospital patients, reflecting light on tilting micromirrors in digital projectors, or filtering radio-frequency signals by converting RF to acoustic waves across structures on chips.  Total sales of MEMS-built sensors and actuators are projected to grow 10% in 2018 to $12.7 billion after increasing nearly 18% in 2017 and 15% in 2016 (Figure 1).

Figure 1

In terms of unit volume, shipments of MEMS-built sensors and actuators are expected to grow by slightly less than 12% to 13.1 billion units worldwide after climbing 20% in 2017 and rising 11% in 2016.  Total revenues for MEMS-based sensors and actuators are projected to increase by a compound annual growth rate (CAGR) of 9.2% between 2017 and 2022 to reach $17.8 billion in the final year of the forecast, according to the 2018 O-S-D Report.  Worldwide shipments of these MEMS-built semiconductors are expected to grow by a CAGR of 11.4% in the 2017-2022 period to 20.2 billion units at the end of the forecast.

One of the biggest changes expected in the five-year forecast period will be greater stability in the average selling price for MEMS-built devices and significantly less ASP erosion than in the past 10 years. The ASP for MEMS-built sensors and actuators is projected to drop by a CAGR of -2.0% between 2017 and 2022 compared to a -4.7% annual rate of decline in the 2012-2017 period and the steep CAGR plunge of -13.6% between 2007 and 2012.  The ASP for MEMS-built devices is expected to be $0.88 in 2022 versus $0.97 in 2017, $1.24 in 2012, and $2.57 in 2007, says the 2018 report.
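The ASP decline rates above follow from the standard compound annual growth rate formula, CAGR = (end/start)^(1/years) − 1. A quick check against the endpoint prices quoted in the report (small rounding differences from the published figures are expected):

```python
def cagr(start, end, years):
    """Compound annual growth rate between two endpoint values."""
    return (end / start) ** (1 / years) - 1

# ASP endpoints quoted in the report
print(f"{cagr(0.97, 0.88, 5):.1%}")   # 2017-2022: -1.9%, i.e. roughly the -2.0% cited
print(f"{cagr(2.57, 1.24, 5):.1%}")   # 2007-2012: -13.6%, matching the cited plunge
```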

The spread of MEMS-based sensors and actuators into a broader range of new “autonomous” and “intelligent” automated applications—such as those connected to the Internet of Things (IoT) and containing artificial intelligence (AI)—will help keep ASPs from falling as much as they did in the last 10 years. IC Insights believes many MEMS-based semiconductors are becoming more specialized for certain applications, which will help insulate them from pricing pressures in the market.

Leti, a research institute of CEA Tech, and VSORA, which specializes in multi-core digital signal processor (DSP) design, today announced they have demonstrated the implementation of 5G New Radio (5G NR) Release 15 on a new DSP architecture that can dramatically reduce time to market of digital modems.

Defined by the 3rd Generation Partnership Project (3GPP), 5G NR is the air interface, or wireless communication link, for the next generation of cellular networks. It is expected to significantly improve connectivity experiences in 5G cellular networks. 3GPP Release 15 of the 5G system architecture, finalized in June 2018, provides the set of features and functionality needed for deploying a commercially operational 5G system.

This first implementation of 5G NR Release 15 physical layer on VSORA’s multi-core DSP demonstrates that it can address timely and complex systems like 5G NR, while providing a highly flexible software-defined development flow. The demonstration has shown that VSORA’s development suite provided an optimized DSP architecture, able to support the concurrent reception of representative 5G quality-of-service regimes covering extreme broadband, narrowband Internet of Things and ultra-low latency systems.

“This new DSP development flow allows signal-processing engineers to evaluate different implementations of their algorithms for cost, processing power, arithmetic precision and power consumption, well before the physical implementation,” said VSORA CEO Khaled Maalej. “The same development flow lets algorithm engineers and software engineers share the same environment and source code, dramatically accelerating time-to-market for Release 15 architectures.”

“VSORA’s innovations simplify the design flow, which eliminates the need to develop HDL-based co-processors,” said Benoit Miscopein, head of Leti’s wireless broadband systems lab. “Our demonstration also shows their product can support a system as hungry in terms of computational resources as the 5G NR modem.”

“VSORA’s added value is the very high flexibility that the company offers in terms of testing various implementation architectural trade-offs,” Miscopein added. “This speeds time-to-market by reducing the time required to converge towards a suitable DSP architecture. The approach proposed by VSORA is also flexible in the sense that the DSP can fulfill the requirements of the standard evolution, e.g. Releases 16 and 17, without redesigning a complete architecture.”

“With the coming 5G mobile standard, traditional DSP technology will run out of steam on multiple levels,” added Maalej. “Our aim is to become the reference point for state-of-the-art DSP applications. VSORA’s technology has the potential to revolutionize DSP architectures, transform the design and implementation processes, and ultimately enhance go-to-market strategies.”

Murata, a manufacturer of electronic components, is significantly increasing global production capacity, including most recently its factory located in Finland. After having recently purchased the previously leased buildings, the company will construct a new building of approximately 16,000 square meters. The new facility is scheduled to be completed by the end of 2019.

The total value of the investment is five billion yen and is underpinned by the growing worldwide demand for MEMS sensors used in the automotive industry and various health and industrial applications.

“The markets for advanced driver-assistance systems, self-driving cars, healthcare, and other emerging technologies are expected to be significant growth drivers. MEMS sensors are critical solutions for these applications and deliver proven measurement accuracy and stability in a variety of conditions,” said Yuichiro Hayata, Managing Director for Murata Electronics Oy.

“With the construction of this new production building, we will significantly increase our MEMS sensor production capacity. Moreover, by responding to the strong demand for gyro sensors, accelerometers, and combo sensors in the automotive, industrial and healthcare fields, we will strengthen our business base in the automotive, industrial equipment and medical device markets, while contributing to the economy and employment of Finland,” stated Makoto Kawashima, Director of the Sensor Product Division at Murata Manufacturing.

Developing operations with long-term perspective

With the factory expansion in Finland, Murata will strengthen both R&D and manufacturing operations with a long-term perspective, increasing utilization of this facility. The company currently employs 1,000 people in Finland and expects to create 150–200 new jobs in 2018–2019.

Murata acquired the Finnish company VTI Technologies – today known as Murata Electronics Oy – in 2012. It is Murata’s only factory manufacturing MEMS sensors outside Japan and has experienced tremendous growth over the last 10 years. The site in Finland also hosts R&D space and one of the biggest cleanroom facilities in the country.

Murata Electronics Oy

Murata Electronics Oy is part of the Japanese Murata Group. The company is located in Vantaa and specializes in the development and manufacture of 3D MEMS (microelectromechanical systems) sensors, mainly for safety-critical applications in automotive, as well as in healthcare and industrial applications. The company employs 1,000 people in Finland.

SiFive, a provider of commercial RISC-V processor IP, today announced the first open-source RISC-V-based SoC platform for edge inference applications based on NVIDIA’s Deep Learning Accelerator (NVDLA) technology.

The demo will be shown this week at the Hot Chips conference and consists of NVDLA running on an FPGA connected via ChipLink to SiFive’s HiFive Unleashed board powered by the Freedom U540, the world’s first Linux-capable RISC-V processor. The complete SiFive implementation is well suited for intelligence at the edge, where high performance with improved power and area profiles is crucial. SiFive’s silicon design capabilities and innovative business model enable a simplified path to building custom silicon on the RISC-V architecture with NVDLA.

NVIDIA open-sourced its leading deep learning accelerator over a year ago to spark the creation of more AI silicon solutions. Open-source architectures such as NVDLA and RISC-V are essential building blocks of innovation for Big Data and AI solutions.

“It is great to see open-source collaborations, where leading technologies such as NVDLA can make the way for more custom silicon to enhance the applications that require inference engines and accelerators,” said Yunsup Lee, co-founder and CTO, SiFive. “This is exactly how companies can extend the reach of their platforms.”

“NVIDIA open sourced its NVDLA architecture to drive the adoption of AI,” said Deepu Talla, vice president and general manager of Autonomous Machines at NVIDIA. “Our collaboration with SiFive enables customized AI silicon solutions for emerging applications and markets where the combination of RISC-V and NVDLA will be very attractive.”

The American Institute for Manufacturing Integrated Photonics (AIM Photonics), a public-private partnership headquartered in New York State to advance the nation’s photonics manufacturing capabilities, today announced that three National Science Foundation (NSF) funded grants totaling $1.2 million will enable collaborative photonics-centered R&D with the Rochester Institute of Technology (RIT), University of California-San Diego (UCSD), and University of Delaware (UD), respectively.

“AIM Photonics is thrilled to work with leading academic institutions including RIT, UCSD, and UD on these three separate, NSF-funded projects to collaboratively enable photonics-focused devices and capabilities that can allow for the more efficient identification of materials, as well as enhanced processes for manufacturing complex photonic devices and next-generation computing capabilities. We are proud to be the central driver of photonics-based advances that can significantly improve the technologies our society depends on,” said Dr. Michael Liehr, CEO of AIM Photonics.

“Partnering with AIM Photonics provides NSF-funded researchers unique access to world-class manufacturing facilities, stimulating innovation and enabling faculty to span the spectrum from fundamental research breakthroughs to translational advances in integrated photonics devices and circuits that directly impact society,” said Dr. Filbert Bartoli, Director of the Division of Electrical, Communications and Cyber Systems in NSF’s Directorate for Engineering.

Rochester Institute of Technology – AIM Photonics Project

The NSF awarded RIT $423,000 as part of the research project, “PIC: Hybrid Silicon Electronic-Photonic Integrated Neuromorphic Networks,” which will focus on realizing high-performance neural networks integrated onto photonic chips. These scalable, efficient architectures, working in tandem with integrated electronics, will overcome challenges related to photonic memory and amplification, offering a hybrid, high-bandwidth computing approach for applications in autonomous systems, information networks, cybersecurity, and robotics. To develop these architectures, RIT will work with AIM Photonics to use its leading-edge PIC toolset, located at SUNY Polytechnic Institute in Albany, NY, and the AIM Photonics TAP facility in Rochester, NY—the world’s first 300mm open-access PIC Test, Assembly, and Packaging (TAP) facility. The project will take place within RIT’s Future Photon Initiative (FPI) and Center for Human-Aware AI (CHAI).

This research effort will also provide educational opportunities for students from elementary school through graduate school, and the AIM Photonics Academy will disseminate the project’s findings to further increase understanding of this fast-growing area of research.

“We are excited to partner with AIM Photonics on this research project. The hybrid electronic-photonic neuromorphic chips my Co-PI (Professor Dhireesha Kudithipudi) and I are developing are directly enabled by the state-of-the-art PIC and TAP capabilities of AIM Photonics,” said Project Principal Investigator, Professor Stefan Preble at Rochester Institute of Technology’s Kate Gleason College of Engineering.

University of California-San Diego – AIM Photonics Project 

The NSF awarded UCSD $405,000 for research entitled, “PIC: Mobile in Situ Fourier Transform Spectrometer on a Chip,” which will enable UCSD to rapidly prototype and test miniaturized, mobile-platform-embedded optical spectrometers that excel at chemical identification. The initial design, fabrication, and validation of such a spectrometer on a silicon chip were recently reported in Nature Communications 9:665 (2018). This effort will continue and culminate with full-scale manufacturing runs at AIM Photonics’ foundry at the Albany Nanotech Complex. The integrated chip-scale Fourier transform spectrometer is to be fully CMOS-compatible for use in mobile phones and other mobile platforms, with potential impact in areas including environmental management, medicine, and security.
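The operating principle behind any Fourier transform spectrometer can be illustrated numerically: the device records an interferogram as a function of optical path delay, and a Fourier transform of that signal recovers the optical spectrum. The following sketch is purely illustrative (it is not the published UCSD device design); the line positions and sampling parameters are hypothetical values chosen for a clean demonstration.

```python
import numpy as np

# Illustrative sketch of the Fourier transform spectroscopy principle,
# NOT the published chip design: an interferogram I(x) is recorded as a
# function of optical path delay x; its Fourier transform is the spectrum.

# Hypothetical input: two spectral lines at 10 and 25 cycles per unit
# delay, with the second line at half the intensity of the first.
N = 1024
x = np.linspace(0.0, 1.0, N, endpoint=False)
f1, f2 = 10.0, 25.0

# Each spectral line contributes a cosine fringe to the interferogram.
interferogram = 1.0 + np.cos(2 * np.pi * f1 * x) + 0.5 * np.cos(2 * np.pi * f2 * x)

# Fourier-transform the mean-subtracted interferogram to get the spectrum.
spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean())) / (N / 2)
freqs = np.fft.rfftfreq(N, d=x[1] - x[0])

# The two strongest peaks land at the original line positions.
peaks = sorted(freqs[np.argsort(spectrum)[-2:]])
print(peaks)  # -> [10.0, 25.0]
```

The miniaturization challenge the UCSD project targets is implementing the variable optical path delay on-chip; the post-processing step shown here stays the same regardless of how the interferogram is generated.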

Undergraduate and graduate students at the institution will also be able to gain hands-on training as the research project simultaneously serves as a community outreach tool to inspire students attending middle and high schools.

“Moreover, we are also developing an educational silicon photonics kit through the NSF’s ERC-CIAN (Engineering Research Center for Integrated Access Networks) and in collaboration with Tyndall National Institute at University College Cork (Ireland). The kit will initially be implemented in an undergraduate lab curriculum with the goal of preparing the future workforce through hands-on experience in this evolving field,” said Project Principal Investigator, Professor Yeshaiahu Fainman, Cymer Chair in Advanced Optical Technologies and Distinguished Professor at the University of California-San Diego.

University of Delaware – AIM Photonics Project

The NSF awarded UD $360,000 as part of the research project, “PIC: Hybrid Integration of Electro-Optic and Semiconductor Photonic Devices and Circuits with the AIM Photonics Institute.” The effort will allow UD to work with AIM Photonics, leveraging the initiative’s expertise and state-of-the-art foundry to develop new heterogeneous manufacturing processes for photonic devices using new materials such as lithium niobate (LiNbO3), which can then be directly integrated with silicon CMOS systems for photonic devices and chip-scale systems.

More specifically, the effort aims to realize high-performance RF-photonic devices: ultra-high-frequency modulators (> 100 GHz) used in data networks; high-efficiency chip-scale routers for advanced data centers; and high-power phased-array antenna photonic feed networks compatible with both current and next-generation wireless communications, along with a range of other commercial applications.

“The heterogeneous integration of LiNbO3 with Silicon Photonics allows for the use of the best properties of both material systems, thereby enabling truly innovative systems for countless emerging applications,” said Project Principal Investigator, Dr. Dennis Prather, Engineering Alumni Professor at the University of Delaware.

AIM Photonics features research, development, and commercialization nodes in Albany, NY, at SUNY Polytechnic Institute, as well as in Rochester, NY, where state-of-the-art equipment and tools are being installed at AIM Photonics’ TAP facility. The initiative also includes an outreach and referral network with the University of Rochester, Rochester Institute of Technology, Columbia University, Massachusetts Institute of Technology, University of California – Santa Barbara, University of Arizona, and New York State community colleges. In total, AIM Photonics counts more than 100 signed members, partners, and additional interested collaborators.