By Paula Doe, SEMI
With artificial intelligence (AI) rapidly evolving, expect applications like voice recognition and image recognition to become more efficient, more affordable, and far more common across a variety of products over the next few years. This growth in applications will drive demand for new architectures that deliver the higher performance and lower power consumption required for widespread AI adoption.
“The challenge for AI at the edge is to optimize the whole system-on-a-chip architecture and its components, all the way to semiconductor technology IP blocks, to process complex AI workloads quickly and at low power,” says Qualcomm Technologies Senior Director of Engineering Evgeni Gousev, who will provide an update on the progress of AI at the edge in a Data and AI program at SEMICON West, July 10-12 in San Francisco.
Qualcomm Snapdragon 845 uses heterogeneous computing across the CPU, GPU, and DSP for power-efficient processing for constantly evolving AI models. Source: Qualcomm
A system approach that optimizes across hardware, software, and algorithms is necessary to bring typically energy-intensive AI computing down to ultra-low power – sub-1-milliwatt levels, low enough to enable always-on machine vision processing. From the chip architecture perspective, processing each AI workload on the most appropriate engine – CPU, GPU, or DSP with dedicated hardware acceleration – provides the best power efficiency, along with the flexibility to handle rapidly changing AI models and a growing diversity of applications.
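The idea of routing each workload to its most power-efficient engine can be sketched as a simple dispatcher. This is an illustrative sketch only – the engine names are real SoC engine types, but the cost table and function names are hypothetical, not Qualcomm's scheduler:

```python
# Hypothetical sketch: route each AI workload to the engine (CPU, GPU, or DSP)
# that processes it with the lowest energy cost. The per-engine costs below are
# made-up illustrative numbers, not measured Snapdragon data.

COSTS_MW = {
    "vision_cnn":   {"CPU": 12.0, "GPU": 4.0, "DSP": 1.5},  # CNN inference favors the DSP
    "control_loop": {"CPU": 2.0,  "GPU": 6.0, "DSP": 3.0},  # light scalar work favors the CPU
}

def dispatch(workload: str) -> str:
    """Return the name of the engine with the lowest energy cost for this workload."""
    costs = COSTS_MW[workload]
    return min(costs, key=costs.get)

print(dispatch("vision_cnn"))    # → DSP
print(dispatch("control_loop"))  # → CPU
```

A real heterogeneous scheduler would also weigh latency, thermal headroom, and which engines support a given model's operations, but the principle is the same: match the workload to the engine, not the other way around.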
“But we’re going to run out of brute force options, so the future opportunity is more innovations with new architectures, dedicated hardware, new algorithms, and new software.” – Evgeni Gousev, Qualcomm Technologies
“So far it’s been largely a brute force approach using conventional architectures and cloud-based infrastructure,” says Evgeni. “But we’re going to run out of brute force options, so future opportunities lie in developing innovative architectures, dedicated hardware, new algorithms, and new software. Innovation will be especially important for AI at the edge and applications requiring always-on functionality. Training is mostly in the cloud now, but in the near future it will start migrating to the device as the algorithms and hardware improve. AI at the edge will also remove some privacy concerns, an increasingly important issue for data collection and management.”
Practical AI applications at the edge where resources are constrained run the gamut, spanning smartphones, drones, autonomous vehicles, virtual reality, augmented reality and smart home solutions such as connected cameras. “More AI on the edge will create a huge opportunity for the whole ecosystem – chip designers, semiconductor and device manufacturers, applications developers, and data and service providers. And it’s going to make a significant impact on the way we work, live, and interact with the world around us,” Evgeni said.
Future generations of chips may need more disruptive systems-level change to handle high data volumes with low power
A next-generation solution for handling the massive proliferation of AI data could be a nanotechnology system, such as the collaborative N3XT (Nano-Engineered Computing Systems Technology) project, led by H.S. Philip Wong and Subhasish Mitra at Stanford. “Even with next-generation scaling of transistors and new memory chips, the bottlenecks in moving data in and out of memory for processing will remain,” says Mitra, another speaker in the SEMICON West program. “The true benefits of nanotechnology will only come from new architectures enabled by nanosystems. One thing we are certain of is that massively more capable and more energy-efficient systems will be necessary for almost any future application, so we will need to think about system-level improvements.”
Major improvement in handling high volumes of data at low energy use will require system-level advances, such as monolithic 3D integration of carbon nanotube transistors in the multi-campus N3XT chip research effort. Source: Stanford University
That means carbon nanotube transistors for logic, high-density non-volatile MRAM and ReRAM for memory, fine-grained monolithic 3D integration, new architectures for computation immersed in memory, and new materials for heat removal. “The N3XT approach is key to the 1000X improvement in energy efficiency that's needed,” says Mitra.
“One thing we are certain of is that massively more capable and more energy efficient systems will be necessary for almost any future application, so we will need to think about system-level improvements.” – Subhasish Mitra, Stanford University
Researchers have demonstrated improvements in all these areas, including multiple hardware nanosystem prototypes targeting AI applications. They have transferred multiple layers of as-grown carbon nanotubes to the target wafer to significantly improve CNT density, and have developed a low-power TiN/HfOx/Pt ReRAM. Because both the CNT and ReRAM processes run at low temperature, multiple vertical layers can be fabricated on top of one another for ultra-dense, fine-grained monolithic 3D integration.
Other speakers at the Data and AI TechXpot include Fram Akiki, VP Electronics, Siemens; Hariharan Ananthanarayanan, motion planning engineer, Osaro; and David Haynes, Sr. director, strategic marketing, Lam Research. See SEMICONWest.org.