Category Archives: Applications

By Serena Brischetto

SEMI spoke with Christian Mandl, Senior Director for Human Machine Interface (HMI), Infineon Technologies AG, ahead of the European MEMS & Sensors Summit. Mandl discussed how the sensing capabilities of machines are getting ever closer to the five human senses, allowing machines to comprehend the environment before acting.

SEMI: What’s it like to lead the Human Machine Interface (HMI) group at Infineon?

Mandl: This example of contextually aware smart devices describes our challenge very well. Devices need to be aware of their surroundings to better adapt their configurations to each specific user; in other words, to provide consumers with a more personalized experience. If machines understand the context around them better, their decision-making capabilities improve, just like a human's. Sensor fusion is the key enabler of contextual awareness. Thanks to sensor fusion, machines can provide more reliable feedback based on data from different sensors captured in the same situation, making the system more robust. Compared to traditional devices, false positives and false negatives are reduced, making the whole solution smarter.

The challenge we are addressing within the HMI group at Infineon is to enable systems that are aware of their surroundings by combining our best-in-class sensors with sophisticated machine learning algorithms. We create solutions that can better sense the environment around the device, to then trigger user-specific reactions. This is what we call intuitive sensing.

SEMI: Will you elaborate on this challenge? What are the greatest difficulties in combining existing technologies and devices with sensors?

Mandl: The traditional approach to adding sensors to technology has been very simplistic. For example, radar sensors for presence detection typically provide the distance to the closest object and trigger a specific action. This approach works, but because it is not customizable it is limited in the number of use cases it can address. By combining sensor fusion with sophisticated machine learning techniques, solutions become more robust and stable. When smart speakers are equipped with our microphones and radar sensors, they can detect a user's presence and track location and motion. Adding advanced algorithms such as beam-forming allows the audio reception beam to be steered towards the user and noise to be filtered out for a clearer understanding of commands.
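To illustrate the beam-forming idea, here is a minimal delay-and-sum beamformer sketch in Python. It assumes a hypothetical 4-microphone linear array and a simulated 2 kHz source; this is the generic textbook technique, not Infineon's implementation:

```python
import numpy as np

def delay_and_sum(mic_signals, mic_positions, angle_rad, fs, c=343.0):
    """Delay-and-sum beamforming for a linear mic array: delay each
    channel so a plane wave from `angle_rad` (off broadside) lines up
    across microphones, then average the aligned channels."""
    n = mic_signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    out = np.zeros(n)
    for sig, pos in zip(mic_signals, mic_positions):
        delay = pos * np.sin(angle_rad) / c  # arrival delay at this mic (s)
        # undo the delay with a frequency-domain phase shift
        out += np.fft.irfft(np.fft.rfft(sig) * np.exp(2j * np.pi * freqs * delay), n)
    return out / len(mic_signals)

# Simulated 2 kHz tone arriving from 30 degrees at a 4-mic array
fs, f0, n = 16000, 2000.0, 512
mics = np.array([0.0, 0.05, 0.10, 0.15])      # mic positions in metres
t = np.arange(n) / fs
tau = mics * np.sin(np.radians(30)) / 343.0   # per-mic arrival delays
signals = np.stack([np.sin(2 * np.pi * f0 * (t - d)) for d in tau])

on_target = delay_and_sum(signals, mics, np.radians(30), fs)    # steered at speaker
off_target = delay_and_sum(signals, mics, np.radians(-30), fs)  # steered away
```

Steering at the source reconstructs the tone at full amplitude, while steering 60 degrees away attenuates it; this is how an audio reception beam can "follow" a user whose position is known, for example from radar.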

The market is demanding more of these innovative ready-to-use solutions. Delivering these solutions requires a thorough evaluation based on very strong knowledge of the sensing elements and the raw data they provide. Infineon has a leading edge here, with more than 40 years’ experience in sensing solutions and a deep-rooted system understanding, to create the ready-to-use sensor solutions demanded by the market.

SEMI: You mentioned that data is key to technological development. Re-innovating our world depends on the quality of valuable and secured data about the environment, and what is done with it. How do you make this possible?

Mandl: Indeed, collecting valuable and trustworthy information is critical for any application, as mislabeled or incorrect data reduces the accuracy of any solution. Using reliable and secured sensors is the first critical step towards high quality data. This is where Infineon's XENSIV™ sensor portfolio plays a crucial role. Our sensors are exceptionally reliable thanks to our industry-leading technologies, and they are the perfect fit for various applications in automotive, industrial and consumer markets. With clean-labeled data in hand and a good understanding of each use-case, we can drastically improve the probability of detecting an event.

SEMI: Can you further explain the sensor fusion concepts that you are working on to connect the real world with the digital world by sensors?

Mandl: A good example is the integration of radar sensors into smart speakers, which tremendously improves the ability of current devices to understand the real world and enables numerous use cases that were not possible before.

Starting with keyword-less interaction with technology, the next generation of IoT devices, with the capability to locate and track users, will be able to adjust their actions to the user's position. For example, when we ask our smart speaker in the living room to “turn on the lights” or “play music,” only the lights and speakers in the user's surroundings should be activated, not the ones in the kitchen. When the user walks into another room, the music and light should follow and shift seamlessly into the new room. Precise presence detection and tracking by radar will enable optimal interaction with consumers for a clearer understanding of commands and a flawless user experience. It should also save power in the smart home by switching off lights and other devices when no one is around.

SEMI: Machines' sensing capabilities are getting closer to the five human senses as they understand the environment before acting. What will the new wave of applications include with regard to consumer markets?

Mandl: The full potential of sensor fusion to enhance the sensing capabilities of machines is hard to imagine today. There are innumerable use cases that can be enabled with the right combination of sensors, data processing algorithms and machine learning tools. Smart devices will be more aware of the situation and will anticipate user commands, leading to the era of intuitive sensing. Imagine a world where you can communicate with your smart device like you talk to another human being!

Thanks to the advanced intelligence that we bring with our HMI group, devices will have a sensor brain that matches fused data from multiple sensors with the customer's needs for each application. Not only will the smart speaker market experience this transformation; so will other IoT devices in areas such as home security and user authentication, and wearables for optimized wellbeing tracking and monitoring. Devices will be capable of achieving more if provided with the right technology combination. Sensor fusion will enable technology to make better and smarter decisions in complex situations, in some cases even better than humans would.

SEMI: What do you expect from European MEMS & Sensors Summit 2018 and why do you recommend attending in Grenoble?

Mandl: This event is a great opportunity not only to stay informed and see what is happening in the MEMS and sensors industry, but also to meet current and new partners and customers. Attending is important to observe how industry leaders are working towards the latest market trends, and discuss what else can be done to make life easier, safer and greener for everyone.

Serena Brischetto is a marketing and communications manager at SEMI Europe.

Originally published on the SEMI blog.

By Dr. Eric Mounier

2017 was a good year for the MEMS and sensors business, and that upward trend should continue. We forecast extended strong growth for the sensors and actuators market, reaching more than $100 billion in 2023 for a total of 185 billion units. Optical sensors, especially CMOS image sensors, will have the lion’s share with almost 40 percent of market value. MEMS will also play an important role in that growth: during 2018–2023, the MEMS market will experience 17.5 percent growth in value and 26.7 percent growth in units, with the consumer market accounting for more than a 50 percent share overall.

Evolution of sensors

Sensors were first developed and used for physical sensing: shock, pressure, then acceleration and rotation. Greater investment in R&D spurred MEMS’ expansion from physical sensing to light management (e.g., micromirrors) and then to uncooled infrared sensing (e.g., microbolometers). From sensing light to sensing sound, MEMS microphones formed the next wave of MEMS development. MEMS and sensors are entering a new and exciting phase of evolution as they transcend human perception, progressing toward ultrasonic, infrared and hyperspectral sensing.

Sensors can help us to compensate when our physical or emotional sensing is limited in some way. Higher-performance MEMS microphones are already helping the hearing-impaired. Researchers at Arizona State University are among those developing cochlear implants — featuring piezoelectric MEMS sensors — which may one day restore hearing to those with significant hearing loss.

The visually impaired may take heart in knowing that researchers at Stanford University are collaborating on silicon retinal implants; Pixium Vision began human clinical trials of its silicon retinal implants in 2017.

It’s not science fiction to think that we will use future generations of sensors for emotion/empathy sensing. Augmenting our reality, such sensing could have many uses, perhaps even aiding the ability of people on the autism spectrum to more easily interpret the emotions of others.

Through my years in the MEMS industry, I have identified three distinct eras in MEMS’ evolution:

  1. The “detection era” in the very first years, when we used simple sensors to detect a shock.
  2. The “measuring era” when sensors could not only sense and detect but also measure (e.g., a rotation).
  3. The “global-perception awareness era” when we increasingly use sensors to map the environment. We conduct 3D imaging with Lidar for autonomous vehicles. We monitor air quality using environmental sensors. We recognize gestures using accelerometers and/or ultrasonics. We implement biometrics with fingerprint and facial-recognition sensors. All of this is possible thanks to sensor fusion of multiple parameters, together with artificial intelligence.

Numerous technological breakthroughs are responsible for this steady stream of advancements: new sensor design, new processes and materials, new integration approaches, new packaging, sensor fusion, and new detection principles.

Global Awareness Sensing

The era of global awareness sensing is upon us. We can either view global awareness as an extension of human sensing capabilities (e.g., adding infrared imaging to visible) or as beyond-human sensing capabilities (e.g., machines with superior environmental perception, such as Lidar in a robotic vehicle). Think about Professor X in Marvel’s universe, and you can imagine how human perception could evolve in the future!

Some companies envisioned global awareness from the start. Movea (now part of TDK InvenSense), for example, began its development with inertial MEMS. Others implemented global awareness by combining optical sensors such as Lidar and night-vision sensors for robotic cars. A third contingent grouped environmental sensors (gas, particle, pressure, temperature) to check air quality. The newest entrant in this group, the particle sensor, could play an especially important role in air-quality sensing, particularly in wearable devices.

Driven by mounting evidence of global air-quality deterioration, air pollution has become a major societal concern. Studies show that there is no safe level of particulates: for every increase in the concentration of PM10 or PM2.5 inhalable particles in the air, the lung cancer rate rises proportionally. Combining a particle sensor with a mapping application in a wearable could allow us to identify the most polluted urban zones.
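As a sketch of how such a wearable mapping application might work, the following Python snippet bins hypothetical (latitude, longitude, PM2.5) samples into grid cells and averages them; the cell size, sample coordinates and readings are all illustrative assumptions:

```python
from collections import defaultdict

def grid_cell(lat, lon, cell_deg=0.01):
    """Quantize a GPS fix to a grid cell roughly 1 km on a side."""
    return (round(lat / cell_deg), round(lon / cell_deg))

def pollution_map(readings, cell_deg=0.01):
    """Average PM2.5 readings per grid cell. `readings` holds
    (lat, lon, pm25_ug_per_m3) tuples from the hypothetical wearable."""
    acc = defaultdict(lambda: [0.0, 0])
    for lat, lon, pm25 in readings:
        cell = grid_cell(lat, lon, cell_deg)
        acc[cell][0] += pm25
        acc[cell][1] += 1
    return {cell: total / count for cell, (total, count) in acc.items()}

# Illustrative samples around Grenoble: two in the centre, one near a busy road
readings = [(45.188, 5.724, 12.0), (45.188, 5.724, 18.0), (45.200, 5.700, 35.0)]
cells = pollution_map(readings)
worst_cell = max(cells, key=cells.get)
```

Sorting the cells by average concentration then identifies the most polluted zones the wearer has passed through.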

The Need for Artificial Intelligence

To realize global awareness, we also need artificial intelligence (AI), but first we have challenges to solve. Activity tracking, for example, requires accurate live classification of sensor data. Relegating all AI processing to a main processor, however, would consume significant CPU resources, reducing available processing power. Likewise, storing all of that data on the device would push up storage costs. To marry AI with MEMS, we must do the following:

  1. Decouple feature processing from the execution of the classification engine, which can run on a more powerful external processor.
  2. Reduce storage and processing demands by deploying only the features required for accurate activity recognition.
  3. Install low-power MEMS sensors that can incorporate data from multiple sensors (sensor fusion) and enable pre-processing for always-on execution.
  4. Retrain the model with system-supported data that can accurately identify the user’s activities.
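The steps above can be sketched as a toy pipeline: on-sensor code fuses the accelerometer axes and reduces each window to a few features, while a decoupled classification engine (here a nearest-centroid rule with hypothetical, offline-trained centroids) runs on the host. This is an illustrative sketch of the idea, not a description of any specific MEMS product:

```python
import numpy as np

def extract_features(accel_window):
    """On-sensor pre-processing (steps 1-3): fuse the x/y/z axes into a
    magnitude signal, then reduce the window to three features."""
    mag = np.linalg.norm(accel_window, axis=1)  # sensor fusion of the axes
    spectrum = np.abs(np.fft.rfft(mag - mag.mean()))
    return np.array([mag.mean(), mag.std(), float(spectrum.argmax())])

def classify(features, centroids):
    """Classification engine, decoupled from feature extraction (step 1):
    nearest centroid over the known activity classes."""
    return min(centroids, key=lambda k: np.linalg.norm(features - centroids[k]))

# Hypothetical centroids, learned offline from labeled data (step 4)
centroids = {"still":   np.array([1.0, 0.0, 0.0]),
             "walking": np.array([1.0, 0.2, 4.0])}

# Synthetic 2-second window at 64 Hz: gravity on z plus a 2 Hz stride bounce
t = np.arange(128) / 64.0
window = np.stack([np.zeros(128), np.zeros(128),
                   1.0 + 0.3 * np.sin(2 * np.pi * 2.0 * t)], axis=1)
activity = classify(extract_features(window), centroids)  # "walking"
```

Only the three-element feature vector crosses from the sensor to the classifier, which is what keeps the always-on part low-power and the storage footprint small.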

There are two ways to add AI and software in mobile and automotive applications. The first is a centralized approach, where sensor data is processed in the application processor unit (APU) that hosts the software. The second is a decentralized approach, where the sensor chip is co-located in the same package, close to the software and the AI (in the DSP for a CMOS image sensor, for example). Whatever the approach, MEMS and sensor manufacturers need to understand AI, although they are unlikely to gain much value at the sensor-chip level.

Heading to an Augmented World

We have achieved massive progress in sensor development over the years and are now reaching the point when sensors can mimic or augment most of our perception: vision, hearing, touch, smell and even emotion/empathy as well as some aesthetic senses. We should realize that humans are not the only ones to benefit from these developments. Enhanced perception will also allow robots to help us in our daily lives (through smart transportation, better medical care, contextually aware environments and more). We need to couple smart sensors’ development with AI to further enhance our experiences with the people, places and things in our lives.

About the author

With almost 20 years’ experience in MEMS, sensors and photonics applications, markets, and technology analyses, Dr. Eric Mounier provides in-depth industry insight into current and future trends. As a Principal Analyst, Technology & Markets, MEMS & Photonics, in the Photonics, Sensing & Display Division, he contributes daily to the development of MEMS and photonics activities at Yole Développement (Yole). He is involved with a large collection of market and technology reports, as well as multiple custom consulting projects: business strategy, identification of investment or acquisition targets, due diligence (buy/sell side), market and technology analyses, cost modeling, and technology scouting, etc.

Previously, Mounier held R&D and marketing positions at CEA Leti (France). He has spoken in numerous international conferences and has authored or co-authored more than 100 papers. Mounier has a Semiconductor Engineering Degree and a PhD in Optoelectronics from the National Polytechnic Institute of Grenoble (France).

Mounier is a featured speaker at SEMI-MSIG European MEMS & Sensors Summit, September 20, 2018 in Grenoble, France.

Originally published on the SEMI blog.

Products built with microelectromechanical systems (MEMS) technology are forecast to account for 73% of the $9.3 billion semiconductor sensor market in 2018 and about 47% of the projected 24.1 billion total sensor units to be shipped globally this year, according to IC Insights’ 2018 O-S-D Report—A Market Analysis and Forecast for Optoelectronics, Sensors/Actuators, and Discretes.  Revenues for MEMS-built sensors—including accelerometers, gyroscope devices, pressure sensors, and microphone chips—are expected to grow 10% in 2018 to $6.8 billion compared to nearly $6.1 billion in 2017, which was a 17% increase from $5.2 billion in 2016, the O-S-D Report says.  Shipments of MEMS-built sensors are forecast to rise about 11% in 2018 to 11.1 billion after growing 19% in 2016.

An additional $5.9 billion in sales is expected to be generated in 2018 by MEMS-built actuators, which use their microelectromechanical systems transducers to translate and initiate action—such as dispensing ink in printers or drugs in hospital patients, reflecting light on tilting micromirrors in digital projectors, or filtering radio-frequency signals by converting RF to acoustic waves across structures on chips.  Total sales of MEMS-built sensors and actuators are projected to grow 10% in 2018 to $12.7 billion after increasing nearly 18% in 2017 and 15% in 2016 (Figure 1).

Figure 1

In terms of unit volume, shipments of MEMS-built sensors and actuators are expected to grow by slightly less than 12% to 13.1 billion units worldwide after climbing 20% in 2017 and rising 11% in 2016.  Total revenues for MEMS-based sensors and actuators are projected to increase by a compound annual growth rate (CAGR) of 9.2% between 2017 and 2022 to reach $17.8 billion in the final year of the forecast, according to the 2018 O-S-D Report.  Worldwide shipments of these MEMS-built semiconductors are expected to grow by a CAGR of 11.4% in the 2017-2022 period to 20.2 billion units at the end of the forecast.

One of the biggest changes expected in the five-year forecast period will be greater stability in the average selling price for MEMS-built devices and significantly less ASP erosion than in the past 10 years. The ASP for MEMS-built sensors and actuators is projected to drop by a CAGR of -2.0% between 2017 and 2022 compared to a -4.7% annual rate of decline in the 2012-2017 period and the steep CAGR plunge of -13.6% between 2007 and 2012.  The ASP for MEMS-built devices is expected to be $0.88 in 2022 versus $0.97 in 2017, $1.24 in 2012, and $2.57 in 2007, says the 2018 report.
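These figures can be sanity-checked with the standard compound-annual-growth-rate formula. Note that the ~$11.5 billion 2017 revenue base below is inferred from the reported 10% growth to $12.7 billion in 2018; it is not stated directly in the report:

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by start and end values."""
    return (end / start) ** (1.0 / years) - 1.0

# Figures from the text: ASP of $0.97 (2017) falling to $0.88 (2022),
# and revenue of ~$11.5B (2017, inferred) rising to $17.8B (2022)
asp_cagr = cagr(0.97, 0.88, 5)      # ~ -1.9%, matching the reported -2.0% CAGR
revenue_cagr = cagr(11.5, 17.8, 5)  # ~ 9.1%, close to the reported 9.2% CAGR
```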

The spread of MEMS-based sensors and actuators into a broader range of new “autonomous” and “intelligent” automated applications—such as those connected to the Internet of Things (IoT) and containing artificial intelligence (AI)—will help keep ASPs from falling as much as they did in the last 10 years.  IC Insights believes many MEMS-based semiconductors are becoming more specialized for certain applications, which will help insulate them from pricing pressures in the market.

Leti, a research institute of CEA Tech, and VSORA, which specializes in multi-core digital signal processor (DSP) design, today announced they have demonstrated the implementation of 5G New Radio (5G NR) Release 15 on a new DSP architecture that can dramatically reduce time to market of digital modems.

Defined by the 3rd Generation Partnership Project (3GPP), 5G NR is the air interface, or wireless communication link, for the next generation of cellular networks. It is expected to significantly improve connectivity experiences in 5G cellular networks. 3GPP Release 15 of the 5G system architecture, finalized in June 2018, provides the set of features and functionality needed for deploying a commercially operational 5G system.

This first implementation of 5G NR Release 15 physical layer on VSORA’s multi-core DSP demonstrates that it can address timely and complex systems like 5G NR, while providing a highly flexible software-defined development flow. The demonstration has shown that VSORA’s development suite provided an optimized DSP architecture, able to support the concurrent reception of representative 5G quality-of-service regimes covering extreme broadband, narrowband Internet of Things and ultra-low latency systems.

“This new DSP development flow allows signal-processing engineers to evaluate different implementations of their algorithms for cost, processing power, arithmetic precision and power consumption, well before the physical implementation,” said VSORA CEO Khaled Maalej. “The same development flow lets algorithm engineers and software engineers share the same environment and source code, dramatically accelerating time-to-market for Release 15 architectures.”

“VSORA’s innovations simplify the design flow, which eliminates the need to develop HDL-based co-processors,” said Benoit Miscopein, head of Leti’s wireless broadband systems lab. “Our demonstration also shows their product can support a system as hungry in terms of computational resources as the 5G NR modem.”

“VSORA’s added value is the very high flexibility that the company offers in terms of testing various implementation architectural trade-offs,” Miscopein added. “This speeds time-to-market by reducing the time required to converge towards a suitable DSP architecture. The approach proposed by VSORA is also flexible in the sense that the DSP can fulfill the requirements of the standard evolution, e.g. Releases 16 and 17, without redesigning a complete architecture.”

“With the coming 5G mobile standard, traditional DSP technology will run out of steam on multiple levels,” added Maalej. “Our aim is to become the reference point for state-of-the-art DSP applications. VSORA’s technology has the potential to revolutionize DSP architectures, transform the design and implementation processes, and ultimately enhance go-to-market strategies.”

The semiconductor industry’s growth drivers – artificial intelligence (AI), the Internet of Things (IoT) and automotive – take center stage as more than 45,000 visitors gather at SEMICON Taiwan starting today. Showcasing the latest developments and innovations in the microelectronics supply chain, SEMICON Taiwan – September 5-7 at the Taipei Nangang Exhibition Center – is the largest semiconductor supply chain event in Taiwan. The event opens with Taiwan’s semiconductor industry revenue poised to grow 6 percent to US$84.76 billion (NT$2.6 trillion) in 2018.

Taiwan leads the world in semiconductor foundry, package and test services and is second in chip design. SEMICON Taiwan features more than 2,000 booths and 680 exhibitors from around the world.

SEMICON Taiwan 2018’s IC 60 – Master Forum, a special event co-organized by SEMI and the Ministry of Science and Technology (MOST), celebrates the 60th anniversary of the birth of the integrated circuit. With their sights set on emerging opportunities, Taiwan semiconductor industry luminaries will highlight the pioneering spirit and tenacious pursuit of smaller, faster, lower-power devices that gave rise to today’s ICs and are the heart of the digital economy.

Premier of the Executive Yuan Lai Ching-te will speak at today’s opening ceremony to highlight the administration’s support for the sustainable growth of Taiwan’s semiconductor industry. Semiconductor heavyweights, academic professionals, and other officials – all key players in Taiwan’s semiconductor industry – are also expected at the ceremony.

With semiconductor processes ramping to 5nm and below, and novel techniques such as heterogeneous integration ushering in improvements to chip functionality, SEMICON Taiwan is the ideal platform for connecting, collaborating and innovating to take advantage of future opportunities.

New show floor features at SEMICON Taiwan include the Smart Manufacturing Journey to highlight future trends and opportunities in smart semiconductor manufacturing and the Smart Workforce Pavilion, which promotes the development of the semiconductor industry talent pipeline. In addition, 22 theme and regional pavilions and a series of forums and networking events spotlight market trends and cutting-edge technologies and open opportunities for cross-field and cross-region collaboration.

“Semiconductors are the backbone of Taiwan’s economic growth and its leadership position in the global semiconductor industry,” said Terry Tsao, president of SEMI Taiwan. “As critical partners, Taiwan policy makers continue to work closely with the industry and will propose a series of reforms across tax, trade, talent, and technology to enrich the region’s investment climate and encourage industry upgrades.”

“Taiwan is in a strong position to help power future semiconductor industry growth with its highly specialized, fully integrated supply chains and years of management experience,” Tsao said. “Taiwan will long remain a key strategic player in the global semiconductor industry.”

For more event information, please visit SEMICON Taiwan. For a SEMICON Taiwan 2018 overview including featured speakers and the list of international forums, please click here.

NXP acquires OmniPHY

September 4, 2018

NXP Semiconductors N.V. (NASDAQ: NXPI), the world’s largest supplier of automotive semiconductors, has acquired OmniPHY, a provider of automotive Ethernet subsystem technology. The company’s expertise includes automotive Ethernet, a technology that enables the rapid data transfer required for autonomous driving. OmniPHY’s advanced high-speed technology, combined with NXP’s leading portfolio and heritage in vehicle networks, uniquely positions NXP to deliver the next-generation of data transfer solutions to carmakers. Financial terms of the transaction are not disclosed.

An automotive networking revolution is underway, driven by the need for higher data capacity and speed to meet the requirements of increasingly autonomous and connected vehicles. New advanced autonomous driving systems will require gigabit data speeds and beyond. Current plans for next-generation vehicles call for eight or more cameras, high definition radar, lidar and V2X capability, all of which generate steep data challenges for current car networks. These requirements, combined with the modern vehicle’s need to offload data to enable the new business opportunities of the connected car, will soon make terabyte levels of data processing commonplace.

“One of the vexing questions of the Autonomous Age is how to move data around the car as fast as possible,” said Ian Riches, executive director in the Strategy Analytics Global Automotive Practice. “Cameras and displays will ramp the number of high-speed links in the car to 150 million by 2020 and by 2030 autonomous car systems will aggressively drive that number to 1.1 billion high-speed links.”

As the self-driving ecosystem works to deliver on emerging automotive data requirements, many have turned to enterprise networking solutions as a stopgap measure for testing. Yet long-term solutions will need to be automotive grade and of a size and weight that make their implementation feasible. NXP’s acquisition of OmniPHY, which has already begun to translate 1000BASE-T1 Ethernet for the automotive space, will give NXP a significant position in this rapidly evolving area.

“Our heritage in vehicle networks is rich and with our leadership positions in CAN, LIN, and FlexRay, we hold a unique viewpoint on automotive networks,” said Alexander E. Tan, vice president and general manager of Automotive Ethernet Solutions, NXP. “The team and technology from OmniPHY give us the missing piece in an extensive high-bandwidth networking portfolio.”

OmniPHY is a pioneer in high-speed automotive Ethernet IP and automotive qualified IP for 100BASE-T1 and 1000BASE-T1 standards. Over its six-year history, it has worked with some of the largest consumer companies in the world and has developed competitive 1st-silicon-right solutions for emerging markets like automotive and industrial Ethernet. OmniPHY interface IP and communication technology along with NXP’s automotive portfolio will form a “one-stop shop” for automotive Ethernet. The companies’ technology synergies will center on 1.25-28Gbps PHY designs and 10-, 100- and 1000BASE-T1 Ethernet in advanced processes.

“We are very excited to join NXP – a leader in automotive electronics, for a front-row seat to the autonomous driving revolution, one that will deliver profound change to the way people live,” said Ritesh Saraf, CEO of OmniPHY. “The combination of our teams and technology will accelerate and advance the delivery of automotive Ethernet solutions providing our customers with high quality and world-class automotive Ethernet innovation.”

Myeloperoxidase – an enzyme naturally found in our lungs – can biodegrade pristine graphene, according to a recent discovery by Graphene Flagship partners at CNRS, the University of Strasbourg (France), Karolinska Institute (Sweden) and the University of Castilla-La Mancha (Spain). Among other projects, the Graphene Flagship designs flexible biomedical electronic devices that will interface with the human body. Such applications require graphene to be biodegradable, so it can be expelled from the body.

To test how graphene behaves within the body, researchers analysed how it was broken down with the addition of a common human enzyme – myeloperoxidase, or MPO. If a foreign body or a bacterium is detected, neutrophils surround it and secrete MPO, destroying the threat. Previous work by Graphene Flagship partners found that MPO could successfully biodegrade graphene oxide.

However, the structure of non-functionalized graphene was thought to be more resistant to degradation. To test this, the team looked at the effects of MPO ex vivo on two forms of graphene: single- and few-layer.

Alberto Bianco, researcher at Graphene Flagship Partner CNRS, explains: “We used two forms of graphene, single- and few-layer, prepared by two different methods in water. They were then taken and put in contact with myeloperoxidase in the presence of hydrogen peroxide. This peroxidase was able to degrade and oxidise them. This was really unexpected, because we thought that non-functionalized graphene was more resistant than graphene oxide.”

Rajendra Kurapati, first author on the study and researcher at Graphene Flagship Partner CNRS, remarks how “the results emphasize that highly dispersible graphene could be degraded in the body by the action of neutrophils. This would open the new avenue for developing graphene-based materials.”

With successful ex-vivo testing, in-vivo testing is the next stage. Bengt Fadeel, professor at Graphene Flagship Partner Karolinska Institute believes that “understanding whether graphene is biodegradable or not is important for biomedical and other applications of this material. The fact that cells of the immune system are capable of handling graphene is very promising.”

Prof. Maurizio Prato, the Graphene Flagship leader for its Health and Environment Work Package said that “the enzymatic degradation of graphene is a very important topic, because in principle, graphene dispersed in the atmosphere could produce some harm. Instead, if there are microorganisms able to degrade graphene and related materials, the persistence of these materials in our environment will be strongly decreased. These types of studies are needed.” “What is also needed is to investigate the nature of degradation products,” adds Prato. “Once graphene is digested by enzymes, it could produce harmful derivatives. We need to know the structure of these derivatives and study their impact on health and environment,” he concludes.

Prof. Andrea C. Ferrari, Science and Technology Officer of the Graphene Flagship, and chair of its management panel added: “The report of a successful avenue for graphene biodegradation is a very important step forward to ensure the safe use of this material in applications. The Graphene Flagship has put the investigation of the health and environment effects of graphene at the centre of its programme since the start. These results strengthen our innovation and technology roadmap.”

The 35 must-watch technologies represented on the Gartner Inc. Hype Cycle for Emerging Technologies, 2018 revealed five distinct emerging technology trends that will blur the lines between humans and machines. Emerging technologies, such as artificial intelligence (AI), play a critical role in enabling companies to be ubiquitous, always available, and connected to business ecosystems to survive in the near future.

“Business and technology leaders will continue to face rapidly accelerating technology innovation that will profoundly impact the way they engage with their workforce, collaborate with their partners, and create products and services for their customers,” said Mike J. Walker, research vice president at Gartner. “CIOs and technology leaders should always be scanning the market along with assessing and piloting emerging technologies to identify new business opportunities with high impact potential and strategic relevance for their business.”

The Hype Cycle for Emerging Technologies report is the longest-running annual Gartner Hype Cycle, providing a cross-industry perspective on the technologies and trends that business strategists, chief innovation officers, R&D leaders, entrepreneurs, global market developers and emerging-technology teams should consider in developing emerging-technology portfolios.

The Hype Cycle for Emerging Technologies is unique among most Gartner Hype Cycles because it garners insights from more than 2,000 technologies into a succinct set of 35 emerging technologies and trends. This Hype Cycle specifically focuses on the set of technologies that is showing promise in delivering a high degree of competitive advantage over the next five to 10 years (see Figure 1).

Source: Gartner (August 2018)

Five Emerging Technology Trends

Democratized AI

AI technologies will be virtually everywhere over the next 10 years. While these technologies currently enable early adopters to adapt to new situations and solve problems not encountered before, they will become available to the masses — democratized. Movements and trends like cloud computing, the “maker” community and open source will eventually propel AI into everyone’s hands.

This trend is enabled by the following technologies: AI Platform as a Service (PaaS), Artificial General Intelligence, Autonomous Driving (Levels 4 and 5), Autonomous Mobile Robots, Conversational AI Platform, Deep Neural Nets, Flying Autonomous Vehicles, Smart Robots, and Virtual Assistants.

“Technologies representing democratized AI populate three out of five sections on the Hype Cycle, and some of them, such as deep neural nets and virtual assistants, will reach mainstream adoption in the next two to five years,” said Mr. Walker. “Other emerging technologies of that category, such as smart robots or AI PaaS, are also moving rapidly through the Hype Cycle, approaching the peak, and will soon cross it.”

Digitalized Ecosystems

Emerging technologies require revolutionizing the enabling foundations that provide the volume of data needed, advanced compute power and ubiquity-enabling ecosystems. The shift from compartmentalized technical infrastructure to ecosystem-enabling platforms is laying the foundations for entirely new business models that are forming the bridge between humans and technology.

This trend is enabled by the following technologies: Blockchain, Blockchain for Data Security, Digital Twin, IoT Platform and Knowledge Graphs.

“Digitalized ecosystem technologies are making their way to the Hype Cycle fast,” said Walker. “Blockchain and IoT platforms have crossed the peak by now, and we believe that they will reach maturity in the next five to 10 years, with digital twins and knowledge graphs on their heels.”

Do-It-Yourself Biohacking

Over the next decade, humanity will begin its “transhuman” era: Biology can then be hacked, depending on lifestyle, interests and health needs. Biohacking falls into four categories: technology augmentation, nutrigenomics, experimental biology and grinder biohacking. However, questions remain about how far society is prepared to accept these kinds of applications and what ethical issues they create.

This trend is enabled by the following technologies: Biochips, Biotech — Cultured or Artificial Tissue, Brain-Computer Interface, Augmented Reality, Mixed Reality and Smart Fabrics.

Emerging technologies in do-it-yourself biohacking are moving rapidly through the Hype Cycle. Mixed reality is making its way to the Trough of Disillusionment, and augmented reality has almost reached the bottom. These pioneers will be followed by biochips, which have just reached the peak and will move on to the plateau in five to 10 years.

Transparently Immersive Experiences

Technology will continue to become more human-centric to the point where it will introduce transparency between people, businesses and things. These technologies extend and enable smarter living, work, and other spaces we encounter.

This trend is enabled by the following technologies: 4D Printing, Connected Home, Edge AI, Self-Healing System Technology, Silicon Anode Batteries, Smart Dust, Smart Workspace and Volumetric Displays.

“Emerging technologies representing transparently immersive experiences are mostly on their way to the peak or — in the case of silicon anode batteries — just crossed it,” said Mr. Walker. “The smart workspace has moved along quite a bit and is about to peak in the near future.”

Ubiquitous Infrastructure

Infrastructure no longer stands in the way of an organization’s goals. The advent and mass popularity of cloud computing and its many variations have enabled an always-on, available and limitless compute environment.

This trend is enabled by the following technologies: 5G, Carbon Nanotube, Deep Neural Network ASICs, Neuromorphic Hardware and Quantum Computing.

Technologies supporting ubiquitous infrastructure are on track to reach the peak and move fast along the Hype Cycle. 5G and deep neural network ASICs, in particular, are expected to reach the plateau in the next two to five years.

Gartner clients can read more in the report “Hype Cycle for Emerging Technologies, 2018.” This research is part of the Gartner Trend Insight Report, “2018 Hype Cycles: Riding the Innovation Wave”. With profiles of technologies, services and disciplines spanning over 100 Hype Cycles, this Trend Insight Report is designed to help CIOs and IT leaders respond to the opportunities and threats affecting their businesses, take the lead in technology-enabled business innovations and help their organizations define an effective digital business strategy.

Additional analysis on emerging technologies will be presented during Gartner Symposium/ITxpo, the world’s most important gathering of CIOs and other senior IT executives. IT executives rely on these events to gain insight into how their organizations can use IT to overcome business challenges and improve operational efficiency. Follow news and updates from the events on Twitter using #GartnerSYM.

Upcoming dates and locations for Gartner Symposium/ITxpo include:

17-20 September 2018: Cape Town, South Africa

14-18 October 2018: Orlando, Florida

22-25 October 2018: Sao Paulo, Brazil

29 October-1 November 2018: Gold Coast, Australia

4-8 November 2018: Barcelona, Spain

12-14 November 2018: Tokyo, Japan

13-16 November 2018: Goa, India

4-6 March 2019: Dubai, UAE

3-6 June 2019: Toronto, Canada

Samsung Electronics Co., Ltd., a world leader in advanced semiconductor technology, today announced its new narrowband (NB) Internet-of-Things (IoT) solution, Exynos i S111.

The new NB-IoT solution offers extremely wide coverage, low-power operation, accurate location feedback and strong security, optimized for today’s real-time tracking applications such as safety wearables or smart meters. The solution integrates a modem, processor, memory and a Global Navigation Satellite System (GNSS) receiver into a single chip to enhance efficiency and flexibility for connected device manufacturers.

“IoT will be able to evolve to offer new features beyond the conventional household space with IoT-dedicated solutions that present a broad range of opportunities,” said Ben Hur, vice president of System LSI marketing at Samsung Electronics. “Exynos i S111’s highly secure and efficient communication capabilities will bring more exciting NB-IoT applications to life.”

As IoT grows to be a part of our everyday lives, some connected devices share useful information instantly in high volumes, but some transmit data in small nuggets over a long period of time. Popular radio connectivity systems such as Bluetooth and ZigBee are suitable for short-range scenarios within confined spaces such as in the home or a building, and broadband communications are commonly used for mobile devices that demand high data rates. On the other hand, NB-IoT supports applications that require reliable low-power communication and wide-range coverage for small-sized data.

To cover long distances with high reliability, the NB-IoT standard adopts a data retransmission mechanism that repeats a transmission until it succeeds or a set retransmission limit is reached. With a high number of these retransmissions, the S111 can cover distances of 10 kilometers (km) or more.
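The retransmit-until-acknowledged behavior described above can be sketched as a simple loop. This is a hypothetical illustration, not Samsung's implementation; `transmit_with_retries`, the `send_frame` callback and the simulated lossy link are all stand-ins:

```python
import random

def transmit_with_retries(payload, send_frame, max_retransmits=8):
    """Retransmit `payload` until the link reports success or the
    retransmission budget is exhausted, mirroring NB-IoT's
    repetition scheme for extending coverage."""
    for attempt in range(1, max_retransmits + 1):
        if send_frame(payload):   # True means the frame was acknowledged
            return attempt        # number of attempts actually used
    return None                   # gave up: no ACK within the budget

# Simulated lossy link: each frame gets through with 30% probability.
random.seed(42)

def lossy_link(payload):
    return random.random() < 0.3

attempts = transmit_with_retries(b"meter-reading", lossy_link)  # 2 with this seed
```

In the real protocol the repetition count is configured by the network per coverage class rather than decided by the device, but the success-or-budget loop captures the trade-off: more repetitions buy range at the cost of airtime and energy.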

Exynos i S111 incorporates an LTE Rel. 14 modem that can transmit data at 127 kilobits per second (kbps) downlink and 158 kbps uplink, and can operate in standalone, in-band and guard-band deployments.

For long standby periods, the S111 utilizes power saving mode (PSM) and expanded discontinuous reception (eDRX), which keep the device dormant for extended periods, enabling battery life of 10 years or more depending on the application and use case. Exynos i S111 also has an integrated GNSS receiver and supports Observed Time Difference of Arrival (OTDOA), a positioning technique that uses cellular towers, for highly accurate and seamless real-time tracking.
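OTDOA works by measuring how much later a signal from each cell tower arrives relative to a reference cell; each time difference constrains the device to a hyperbola, and the intersection of the hyperbolas gives the position. A minimal noise-free 2-D sketch, with hypothetical tower coordinates and a coarse grid search standing in for the real solver:

```python
import math

C = 299_792_458.0  # speed of light, m/s

# Hypothetical 2-D tower positions in metres; tower 0 is the reference cell.
towers = [(0.0, 0.0), (5000.0, 0.0), (0.0, 5000.0)]
device = (1200.0, 3400.0)  # ground truth, used only to synthesize the timings

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Observed time differences of arrival, each relative to the reference tower.
tdoa = [(dist(device, t) - dist(device, towers[0])) / C for t in towers[1:]]

def tdoa_residuals(p):
    """How far a candidate position p is from satisfying each measured TDOA."""
    return [(dist(p, t) - dist(p, towers[0])) / C - td
            for t, td in zip(towers[1:], tdoa)]

# Coarse 100 m grid search over the area: the true position is the point
# where the sum of squared residuals vanishes.
best = min(((x * 100.0, y * 100.0) for x in range(51) for y in range(51)),
           key=lambda p: sum(r * r for r in tdoa_residuals(p)))
```

A production implementation would solve the hyperbolic equations with least squares rather than a grid, and would have to cope with timing noise and multipath, but the geometry is the same.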

Transmitted data are kept secure and private with the S111, as the solution utilizes a separate Security Sub-System (SSS) hardware block along with a Physical Unclonable Function (PUF) that creates a unique identity for each chipset.

Following the successful launch of the company’s first IoT solution, Exynos i T200, in 2017, Samsung plans to continue expanding the ‘Exynos i’ lineup with offerings specially tailored for narrowband networks.

Semtech Corporation (Nasdaq: SMTC), a supplier of high performance analog and mixed-signal semiconductors and advanced algorithms, announced that EasyLinkin, a high-tech enterprise specializing in the research and development of low power wide area network (LPWAN) technologies, has incorporated Semtech’s LoRa® devices and wireless radio frequency technology (LoRa Technology) into its IoT smart metering solutions to improve facility management.

LoRa-enabled smart meters from EasyLinkin monitor utility usage in real time, giving facilities the visibility to reduce operating costs and conserve natural resources. EasyLinkin’s LoRa-based products are easy to install on existing meters and are currently being deployed across China in both public and private LoRaWAN™ networks.

“Our customers are able to analyze their usage through real-time data collected by our smart metering solutions to reduce operational costs,” said Kun Xu, Co-Founder & Executive President at EasyLinkin. “This would not be possible without Semtech’s LoRa Technology, which provides the ideal IoT solution for utility monitoring and management. The easy deployment and flexibility of LoRa Technology enables consistent data transmission in either a private or public network.”

“With an increased emphasis on sustainability, there’s an absolute need for IoT solutions, like Semtech’s LoRa Technology, to solve real-world environmental challenges,” said Vivek Mohan, Director of IoT, Semtech’s Wireless and Sensing Products Group. “Integrating LoRa Technology into EasyLinkin’s metering devices provides an IoT solution that reduces operational costs like maintenance and allows an inside look into utility consumption, letting consumers change their usage accordingly.”

About Semtech’s LoRa® Devices and Wireless RF Technology

Semtech’s LoRa devices and wireless radio frequency technology form a widely adopted long-range, low-power solution for IoT that gives telecom companies, IoT application makers and system integrators the feature set necessary to deploy low-cost, interoperable IoT networks, gateways, sensors, module products, and IoT services worldwide. IoT networks based on the LoRaWAN™ specification have been deployed in over 100 countries, and Semtech is a founding member of the LoRa Alliance™, the fastest-growing IoT alliance for Low Power Wide Area Network applications. To learn more about how LoRa enables IoT, visit Semtech’s LoRa site and join the LoRa Community to access free training as well as an online industry catalog showcasing the products you need for building your ideal IoT application.