Huge growth in cloud memory changes semiconductor supply chain

By Paula Doe, SEMI

The explosive growth in demand for internet bandwidth and cloud computing capacity brings a new set of technology challenges and opportunities for the semiconductor supply chain. “Azure grew by 2X last year, but we can’t pull more performance out of the existing architecture,” noted Kushagra Vaid, Microsoft’s GM of Hardware Engineering, Cloud & Enterprise, at last week’s Linley Cloud Hardware Conference in Santa Clara, Calif. “We are at a junction point where we have to evolve the architecture of the last 20-30 years.” He stressed that the traditional way of designing chips and systems to optimize for particular workloads isn’t working anymore. “We can’t design for a workload so huge and diverse. It’s not clear what part of it runs on any one machine,” he noted. “How do you know what to optimize? Past benchmarks are completely irrelevant.”

Explosive growth in demand for data storage and processing in the cloud means change across the chip world. Source: Cisco VNI Global IP Traffic Forecast

Roadmap accelerates for networking chips 

Look for accelerating change in the networking chip market. Now that merchant chip suppliers have captured some 75 percent of the networking chip market from proprietary in-house suppliers, intense competition has driven astonishing reductions in size and power, and two-year technology cycles, reported keynote speaker Andreas Bechtolsheim, Arista Networks’ Chief Development Officer and Chairman. “The cloud is accelerating transitions, as the big data centers demand low cost,” he noted, explaining that new technologies no longer see gradual adoption across different applications. They now have to start out cheaper to get any traction at all, but then ramp sharply to volume within six months as the big data centers convert.

Data center networks expect transition to 400G to start in 2018. Source: MACOM

Bechtolsheim said the majority of the network link market will convert from 40G to 100G this year, and to 400G in 2019. For 800G two years after that, chip design will have to start this year. Luckily there’s a clear path for scaling on the chip side, from the current generation’s 28nm technology down to 16nm and 7nm. But it could be a push for some of the ecosystem. “It’s pushing the packaging vendors, as 1.0mm solder balls are about the limit,” said Bechtolsheim. Since the 400G standard took eight years to complete, companies are also forming a group to speed the standards process by defining the 800G standard as simply 2X the 400G one.

The 40G chips at the server layer are moving to four-level pulse amplitude modulation (PAM4), which encodes two bits per symbol to double the data rate at the same symbol rate, and which will require moving to digital signal processing. Moving from analog bipolar to digital CMOS technology also enables significant scaling of chip size and power: roughly 50 percent less die area and 40 percent less power at 16nm FinFET compared to 28nm, noted Chris Collins, MACOM’s director of marketing. The company plans 7nm 800G devices next year.
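
To make the PAM4 idea concrete, here is a minimal Python sketch of the line coding itself: each symbol carries two bits on one of four amplitude levels, doubling throughput over two-level signaling at the same symbol rate. The Gray-coded level mapping below is a common convention, not any particular vendor’s implementation.

    # Gray-coded mapping: adjacent amplitude levels differ by only one bit,
    # so a one-level slicer error corrupts a single bit.
    BITS_TO_LEVEL = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}
    LEVEL_TO_BITS = {v: k for k, v in BITS_TO_LEVEL.items()}

    def pam4_encode(bits):
        """Map a bit sequence (even length) to PAM4 amplitude levels."""
        assert len(bits) % 2 == 0
        return [BITS_TO_LEVEL[(bits[i], bits[i + 1])]
                for i in range(0, len(bits), 2)]

    def pam4_decode(samples):
        """Slice received samples to the nearest level, recover the bits."""
        bits = []
        for s in samples:
            level = min((-3, -1, 1, 3), key=lambda lv: abs(lv - s))
            bits.extend(LEVEL_TO_BITS[level])
        return bits

    data = [1, 0, 0, 1, 1, 1, 0, 0]
    symbols = pam4_encode(data)   # 8 bits -> 4 symbols (2 bits per symbol)
    assert pam4_decode(symbols) == data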

New layers and new types of memory

One likely change is new types of memory in new places: higher speeds, different levels of non-volatile cache, and designs and accelerator subsystems that limit the need to move large amounts of data back and forth over limited pipelines. “Data is doubling every 2-2.5 years, but DRAM bandwidth is only doubling every 5 years. It’s not keeping up,” noted Steven Woo, Rambus VP, Systems and Solutions. “We’ll see the addition of more tiers of memory over the next few years.” He suggested the emerging challenge would be what data to place where, using what technology, and how to move memory in general closer to the processing. Racks may become the basic unit instead of servers, so each can be optimized with more memory or more processors as needed.
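
A quick back-of-envelope calculation shows why that gap forces new tiers. The doubling periods are the ones Woo quotes; the compounding arithmetic itself is just illustrative, not Rambus data.

    # Data volume doubles every ~2.5 years; DRAM bandwidth every ~5 years.
    def growth(years, doubling_period):
        return 2 ** (years / doubling_period)

    for years in (5, 10):
        data = growth(years, 2.5)   # data volume multiple
        bw = growth(years, 5.0)     # DRAM bandwidth multiple
        print(f"after {years} yrs: data x{data:.0f}, "
              f"bandwidth x{bw:.0f}, gap x{data / bw:.0f}")
    # after 5 yrs: data x4, bandwidth x2, gap x2
    # after 10 yrs: data x16, bandwidth x4, gap x4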

Handling big data in the cloud means more opportunity for new memory technologies in an emerging tier between DRAM and solid state drives. Source: Rambus

Specialized accelerators speed particular applications

Another emerging solution is specialized chips or subsystem boards that accelerate particular types of cloud processing by taking over some jobs from the CPU cores, typically with different types of processors and lots of localized memory. Google and Wave Computing have their accelerator chips optimized for neural network processing. Mellanox offers offload adapter cards based on ASICs, FPGAs or RISC cores, with increasingly complex functions, claiming the potential to offload as much as 80 percent of the CPU’s overhead functions, for a 2.7X increase in throughput per server. MoSys proposes replacing conventional content-addressable memory with a programmable search engine, based on an FPGA, a lot of SRAM, and software that searches and routes with different strategies for different types of applications, to significantly increase speeds. Chelsio offers a module to handle encryption and decryption off the CPU without having to shuttle data back and forth to memory. Amazon is even renting FPGAs in its cloud so users can design their own accelerators for their particular workloads. But Microsoft’s Vaid remained skeptical that a proliferation of solutions for particular applications would be the best approach for general use in the cloud.
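
A rough Amdahl-style model makes Mellanox’s numbers plausible: if some fraction of CPU cycles goes to network and storage housekeeping and an adapter absorbs most of it, the freed cycles translate directly into throughput. The 80 percent and 2.7X figures are from the talk; the model and the overhead fractions below are assumptions chosen to bracket them.

    def offload_speedup(overhead, offloaded):
        """Throughput gain when `offloaded` share of `overhead` leaves the CPU."""
        useful_before = 1.0 - overhead
        useful_after = 1.0 - overhead * (1.0 - offloaded)
        return useful_after / useful_before

    for overhead in (0.5, 0.68, 0.8):
        print(f"overhead {overhead:.0%}: 80% offload -> "
              f"{offload_speedup(overhead, 0.8):.2f}x throughput")
    # overhead 50%: 80% offload -> 1.80x throughput
    # overhead 68%: 80% offload -> 2.70x throughput
    # overhead 80%: 80% offload -> 4.20x throughput

On this reading, the claimed 2.7X is consistent with a server spending roughly two-thirds of its cycles on overhead before offload.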

300mm production and passive fiber alignment improve silicon photonics

Silicon photonics technology continues to make progress, and may find application in the market for very high bandwidth, mid- to long-haul transmission (30 meters to 80 kilometers), where spectral efficiency is the key driver, suggested Ted Letavic, GlobalFoundries Senior Fellow. “4.5G and 5G communications will use photonics solutions similar to those needed in the data center, for volume that will drive down cost,” he noted. The foundry has now transferred its monolithic process to 300mm wafers, where immersion lithography enables better overlay and line edge roughness, reducing losses by 3X. The company has an automated, passive solution for attaching optical fiber to the edge of the chip: an automated pick-and-place tool pushes ribbons of multiple fibers into MEMS grooves in the chip. Letavic said the edge coupling process is in production for a telecommunications application.

An array of optical fibers is passively aligned by sliding into MEMS grooves at the side of the chip, for 100Gbps x 12 = 1.2Tbps interconnect in a flat form factor. Source: GlobalFoundries

For more information about SEMI, visit www.semi.org. SEMI also offers many events covering electronics manufacturing supply chain issues; for a full list of SEMI events, visit www.semi.org/en/events. SEMI is on LinkedIn and Twitter.
