Tag Archives: AI

AI needs memory

By Pete Singer, Editor-in-Chief

Artificial intelligence, which is extremely useful for analyzing large amounts of data (think image processing and natural language recognition), is already impacting every aspect of our lives. Products made today are being redesigned to incorporate some form of intelligence so that they can adapt to the preferences of the user. Smart speakers integrating Alexa or Siri are perhaps the best examples in the home and office, but there’s huge value in AI for businesses. “AI is so fundamental to improving what we expect of devices and their ability to interpret our needs and even predict our needs, that’s something that we’re going to see more and more of in the consumer space. And then of course in the industrial environments as well,” notes Colm Lysaght, vice president of corporate strategy at Micron Technology. “Many different industries are working and using machines and algorithms to learn and adapt and do things that were not possible before.”

There are various ways to crunch this data. CPUs work very well for structured floating point data, while GPUs work well for AI applications – but that doesn’t mean people aren’t using traditional CPUs for AI. In fact, AI is being implemented today with a mix of CPUs, GPUs, ASICs and FPGAs. Data crunching also needs a lot of memory and storage.

A new report by Forrester Consulting, commissioned by Micron, takes a look at how companies are implementing AI and the hardware they are using, with a special focus on memory and storage.

Forrester conducted an online survey of 200 IT and business professionals who manage architecture, systems, or strategy for complex data at large enterprises in the US and China, along with three additional interviews, to further explore this topic. Here are the key findings:

  • AI/ML will continue to exist in public and private clouds. Early modeling and training on public data is occurring in public clouds, while production at scale and/or on proprietary data will often be in a private cloud or hybrid cloud to control security and costs.
  • Memory and storage are the most common challenge in building AI/ML training hardware. While the CPU/GPU/custom compute discussion received great attention, memory and storage are turning out to be the most common challenge in real world deployments and will be the next frontier in AI/ML hardware and software innovation.
  • Memory and storage are critical to AI development. Whether focusing on GPU or CPU, storage and memory are critical in today’s training environments and tomorrow’s inference.

“AI is having a very large impact on society and it is fundamentally rooted in our technology. Many different applications, all of which are interpreting data in real time, need fast storage and they need memory,” Lysaght said. “At Micron, we’re transforming the way the world uses information to enrich our lives.”

To get to the next level in performance/Watt, innovations being researched at the AI chip level include low precision computing, analog computing and resistive computing. These will require new innovation in design, manufacturing and test. That’s the focus of The ConFab, to be held May 14-17 at The Cosmopolitan of Las Vegas (see www.theconfab.com for more information).
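To give a rough sense of the low precision computing idea mentioned above, the sketch below (a hypothetical NumPy example, not any vendor’s implementation) quantizes float32 weights to int8, trading a small amount of accuracy for a 4x reduction in memory footprint and bandwidth:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric linear quantization of a float32 array to int8."""
    scale = np.max(np.abs(x)) / 127.0       # one scale factor for the whole tensor
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 array from int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)  # stand-in for model weights
q, s = quantize_int8(w)
err = np.max(np.abs(w - dequantize(q, s)))
print(f"max quantization error: {err:.4f}")  # bounded by half the scale factor
```

In practice, per-channel scales and calibration data reduce the error further; this toy version only shows the basic storage-versus-precision trade-off.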

IBM’s Jeff Welser to Keynote The ConFab 2019

AI was a big focus at The ConFab 2018, and we will continue that theme in 2019 with a keynote talk by IBM’s Jeff Welser.
The ConFab 2019 will return to The Cosmopolitan of Las Vegas on May 14-17. In 2018, AI and other leading technologies were discussed by speakers from IBM, Google, Nvidia, HERE Technologies, Silicon Catalyst, TechInsights, Siemens and Qorvo, among many others.

AI, which represents a $2 trillion market opportunity on top of the existing $1.5-2 trillion information technology industry, is a huge game changer for the semiconductor industry. In addition to AI chips from traditional IC companies such as Intel, IBM and Qualcomm, more than 45 start-ups are working to develop new AI chips, with VC investments of more than $1.5B. Tech giants such as Google, Facebook, Microsoft, Amazon, Baidu and Alibaba are also developing AI chips.

As Vice President and Lab Director at IBM Research – Almaden, Dr. Welser oversees exploratory and applied research. Home of the relational database and the world’s first hard disk drive, Almaden today continues its legacy of advancing data technology and analytics for Cloud and AI systems and software, and is increasingly focused on advanced computing technologies for AI, neuromorphic devices and quantum computing. Since joining IBM Research in 1995, Dr. Welser has worked on a broad range of technologies, including novel silicon devices, high performance CMOS and SOI device design, and next generation system components. He has directed teams in both development and research, as well as running industrial, academic and government consortiums, including the SRC Nanoelectronics Research Initiative.

Dr. Welser will describe how making AI semiconductor engines will require a wildly innovative range of new materials, equipment, and design methodologies. To get to the next level in performance/Watt, innovations being researched at the AI chip level, at IBM and elsewhere, include low precision computing, analog computing and resistive computing.

Additional industry experts adding to The ConFab 2019 Agenda will be announced soon.

About The ConFab
The ConFab, now in its 15th year, is the premier semiconductor manufacturing and design conference and networking event, bringing notable industry leaders together to connect and collaborate. For more information, visit www.theconfab.com. If you represent an equipment, material or service supplier and would like to inquire about participating, contact Kerry Hoffman, Director of Sales, at [email protected]; to attend as a guest, contact Sally Bixby at [email protected].

AI Focus of The ConFab

Artificial Intelligence will be a focus of The ConFab 2018, to be held May 20-23 at The Cosmopolitan of Las Vegas. We’ll hear from a variety of speakers on why AI is so important to the semiconductor industry, not only in terms of the new types of chips that will be required, but also in how AI will bring dramatic improvements to the semiconductor manufacturing process.

“The exciting results of AI have been fueled by the exponential growth in data, the widespread availability of increased compute power, and advances in algorithms,” notes Rama Divakaruni of IBM, our keynote speaker. “Continued progress in AI – now in its infancy – will require major innovation across the computing stack, dramatically affecting logic, memory, storage, and communication.”

Rama will explain how the influence of AI is already apparent at the system level in trends such as heterogeneous processing with GPUs and accelerators, and memories with very high bandwidth connectivity to the processor. The next stages will involve elements that exploit characteristics benefiting AI workloads, such as reduced precision and in-memory computation. Further out, analog devices that can combine memory and computation, and thus minimize the latency and energy expenditure of data movement, offer the promise of orders-of-magnitude power-performance improvements for AI workloads.
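The in-memory computation concept above can be sketched numerically. The toy model below (an illustrative assumption, not any vendor’s design) treats an analog resistive crossbar as computing a matrix-vector product in place, with Gaussian noise standing in for device variation:

```python
import numpy as np

def crossbar_matvec(G, v, noise_std=0.01, rng=None):
    """Toy model of an analog resistive crossbar computing y = G @ v.

    Each output current is the Ohm's-law sum of conductance * voltage
    products; additive Gaussian noise stands in for device variation,
    so no data is moved between separate memory and compute units.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    ideal = G @ v
    noise = rng.normal(0.0, noise_std * np.abs(ideal).max(), size=ideal.shape)
    return ideal + noise

rng = np.random.default_rng(1)
G = rng.uniform(0.0, 1.0, (8, 8))  # conductances: the weights stored in memory
v = rng.uniform(0.0, 1.0, 8)       # input voltages applied to the rows
y = crossbar_matvec(G, v, rng=rng)
print(np.max(np.abs(y - G @ v)))   # deviation from the ideal matvec stays small
```

The design point this illustrates: because the multiply-accumulate happens where the weights already reside, the usual memory-to-processor traffic, and its latency and energy cost, largely disappears, at the price of analog noise that AI workloads can often tolerate.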

John Hu, Director of Advanced Technology at Nvidia Corporation, will also address AI in a talk titled “The Era of Deep Learning IC Industry Driven by AI, Autonomous Driving and Virtual Reality.” Hu notes that the “big bang” of AI and autonomous driving has driven the IC industry into a new era of rapid growth and innovation. In his talk, Hu will describe how the next 1,000x improvement requires a paradigm shift in collaboration and co-optimization across the whole industry, from materials and process technologies to design and chip/system platforms. In an era when machines can improve themselves through deep learning, hear how the semiconductor industry also needs the capability of deep learning for innovation, to stay ahead in a changing competitive landscape.

“Artificial intelligence has brought human beings to a point in history, for our industry and the world in general, that is more revolutionary than a small, evolutionary step,” says Howard Witham, Vice President of Texas Operations at Qorvo, who will speak on the potential of AI in the semiconductor fab. Howard will describe how AI provides predictive maintenance, auto defect and wafer map classification, outlier detection, automated recipe setups based on device requirements and upstream data, and dynamic interpolation and guard-banding.

Please join us for these and other insightful talks, including one from Google’s John Martinis on quantum computing. Visit www.theconfab.com for more information.