Graphcore leverages Mentor DFT solutions to speed time to market for innovative AI acceleration chip

Mentor, a Siemens business, today announced that artificial intelligence (AI) semiconductor innovator Graphcore (Bristol, U.K.) successfully met its silicon test requirements and achieved rapid test bring-up on its Colossus Intelligence Processing Unit (IPU) by using Mentor’s Tessent™ product family.

Graphcore’s recently announced Colossus IPU targets machine intelligence training and inference in datacenters. The first-of-its-kind device lowers the cost of accelerating AI applications in cloud and enterprise datacenters, while increasing the performance of both training and inference by up to 100x compared to the fastest systems today.

Graphcore required a DFT solution that could reduce the cost and time challenges associated with testing the Colossus IPU’s novel architecture and exceptionally large design. Integrating 23.6 billion transistors and more than a thousand IPU cores, Colossus is one of the largest processors ever fabricated.

Mentor’s Tessent is the market-leading DFT solution, helping companies achieve higher test quality, lower test cost and faster yield ramps. The register-transfer level (RTL)-based hierarchical DFT foundation in Tessent features an array of technologies specifically suited to address the implementation and pattern-generation challenges of AI chip architectures.

Graphcore leveraged these capabilities, together with the Tessent SiliconInsight integrated silicon bring-up environment, on the Colossus IPU to meet its test requirements while minimizing cycle time for DFT implementation, pattern generation, verification and silicon validation.

“We used Mentor’s fully automated Tessent platform for our series of initial silicon parts, together with an all-Mentor DFT flow, allowing us to ship fully tested and validated parts within the first week,” said Phil Horsfield, vice president of Silicon at Graphcore. “We were able to have Logic BIST, ATPG and Memory BIST up and running in under three days. This was way ahead of schedule.”

Research firm IBS, Inc. estimates that AI-related applications consumed $65 billion (USD) of processing technology last year, growing at an 11.5 percent annual rate and significantly outpacing other segments. This processing demand has until now been supplied by microprocessors not fully optimized for high AI workloads. To meet this growing demand while significantly lowering computational cost, more than 70 companies have announced plans to create new processing architectures based on massive parallelism and specialized for AI workloads.

“Hardware acceleration for AI is now a very competitive and rapidly evolving market. As a result, fast time to market is a leading concern for this segment,” said Brady Benware, senior marketing director for the Tessent product family at Mentor, a Siemens business. “Companies participating in this market are choosing Tessent because its RTL-based hierarchical DFT approach provides extremely efficient test implementation for massively parallel architectures, and Tessent’s SiliconInsight debug and characterization capabilities eliminate costly delays during silicon bring-up.”
