Insights From Leading Edge



IFTLE 376 ASE / TDK launch ASE Embedded; The AI Ecosystem Develops

By Dr. Phil Garrou, Contributing Editor

ASE, TDK Embedded Chip Joint Venture begins

Taiwan’s ASE has initiated a joint venture with Japan’s TDK Corp. to produce embedded packaging solutions in Kaohsiung, Taiwan. ASE holds 51% ownership in the venture, which currently employs 150 people.

With initial capital of $51 million, ASE Embedded Electronics Inc. has started operations manufacturing embedded substrates using TDK’s SESUB (semiconductor embedded in substrate) technology (see IFTLE 238, “ASE & the Apple watch, ASE / TDK JV…” and IFTLE 347, “ASE Embedded Packaging Solutions”).


AI vs IoT

AI and IoT are both buzzwords predicted to drive the electronics industry over the next decade. IFTLE is bullish on AI, not so much on IoT. As I have explained before, my opinion is formed from a packaging perspective: while I think AI will need all the latest high-end packaging solutions, I still perceive that most IoT devices will require the absolute lowest-cost, stripped-down packaging available. AI platforms are an intimate combination of hardware and software, and they will certainly require the latest we have to offer in high-end packaging solutions.

AI processing will go into home control devices, autos, surveillance systems, airplanes, wearables, and things we have yet to think of. AI processing is unique in that traditional customers such as Amazon, Google, and Apple have begun to design their own AI chips in hopes of differentiating their products from those of rivals. This has major ramifications for companies like Intel and Nvidia, which will now be competing with their customers.

While I certainly am not an AI expert, we all must quickly deepen our knowledge in this area, which I see leading advanced packaging into the next decade. A list of current participants has recently been compiled, shown in the table below (link).

[Table: current AI chip participants]

Recently on “Graphics Speaks,” Kathleen Maher looked at how some of these cloud companies, IP companies, and traditional semiconductor companies have conflicting ambitions in the AI marketplace. I recommend reading the full article (link).

While cloud companies like Google appear to favor custom chips to augment CPUs and GPUs, semiconductor and IP companies are designing chips to enable efficient hardware and neural-net systems, and Intel is proposing an open platform ecosystem based on Xeon, FPGAs, and specialized processors like Nervana and Saffron.

Google

Google’s Tensor Processing Unit (TPU) was introduced last year. Its initial beta customer, Lyft, is using AI to recognize surroundings, locations, street signs, and more. The cloud-based TPU delivers 180 teraflops of floating-point performance from four ASICs with 64 GB of high-bandwidth memory. These modules can be used alone or connected via a dedicated network to form multi-petaflop ML supercomputers that Google calls TPU pods.
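The module-level figures quoted above imply a straightforward per-chip number; a quick back-of-the-envelope check (all values taken from the text, the variable names are my own):

```python
# Back-of-the-envelope check of the Cloud TPU module figures quoted above.
tflops_per_module = 180   # total floating-point performance per module
asics_per_module = 4      # ASICs per module
hbm_gb = 64               # high-bandwidth memory per module, in GB

tflops_per_asic = tflops_per_module / asics_per_module
print(f"{tflops_per_asic} TFLOPS per ASIC")  # 45.0 TFLOPS per ASIC
```

So each of the four ASICs contributes roughly 45 teraflops toward the 180-teraflop module total.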

Apple

The best-known mobile AI processor is in the Apple iPhone X. Apple’s A11 is a 64-bit ARM SoC with a six-core CPU: two high-performance 2.39 GHz cores called Monsoon and four energy-efficient cores called Mistral. The A11’s performance controller gives the chip access to all six cores simultaneously. The A11 also includes a three-core Apple-designed GPU, the M11 motion coprocessor, an image processor supporting computational photography, and the new Neural Engine that comes into play for Face ID and other machine learning tasks.

Amazon

Amazon is reportedly developing a chip designed for artificial intelligence to work with the Echo and other hardware powered by Amazon’s Alexa virtual assistant (link). The chip should allow Alexa-powered devices to respond more quickly to commands by allowing more data processing to be handled on the device rather than in the cloud.

Nvidia

Nvidia has announced its new Volta GPU with 640 tensor cores, which delivers over 100 teraflops. It has been adopted by leading cloud suppliers including Amazon, Microsoft, Google, Oracle, and others. On the OEM side, Dell EMC, HP, Huawei, IBM, and Lenovo have all announced Volta-based offerings for their customers.

Microsoft and Intel’s “Brainwave”

Microsoft has teamed with Intel to offer Intel’s Stratix 10 FPGAs for AI processing on Microsoft Azure (see below) under the codename “Brainwave” (link). Intel is proposing FPGAs plus processors for AI work and is reportedly focusing on the Stratix 10 FPGA as an AI companion to its Xeon processors.

[Microsoft Brainwave figure]

Intel’s 14 nm Stratix 10 FPGAs accelerate Microsoft’s Azure deep learning platform using “soft” deep neural network (DNN) units synthesized onto the FPGAs instead of hardwired DNN processing units (DPUs). Brainwave is designed for live data streams, including video, sensor feeds, and search queries.
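The core computation such a soft DNN unit accelerates is the dense-layer matrix–vector product, y = f(Wx + b). A minimal pure-Python sketch for illustration only (the function names are my own; the real units are pipelined FPGA datapaths, not Python):

```python
# Minimal sketch of the dense-layer math a "soft" DNN unit accelerates:
# y = relu(W @ x + b). Plain Python lists for clarity.
def relu(v):
    # Element-wise rectified linear activation
    return [max(0.0, x) for x in v]

def dense(W, b, x):
    # Matrix-vector product plus bias, then activation
    return relu([sum(wij * xj for wij, xj in zip(row, x)) + bi
                 for row, bi in zip(W, b)])

W = [[0.5, -1.0], [2.0, 0.25]]
b = [0.0, -0.5]
print(dense(W, b, [1.0, 2.0]))  # [0.0, 2.0]
```

An FPGA implements the same multiply-accumulate loop as a wide, pipelined hardware engine, which is what makes it attractive for streaming inference workloads.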

Intel

Intel is making a major play in AI. Intel has multiple processor options for AI, including Xeon, FPGAs, Nervana, Movidius, and Saffron (link).

Saffron Technology was acquired by Intel in 2015. It develops “…cognitive computing systems that use incremental learning to understand and unify by entity (person, place or thing) the connections between an entity and other ‘things’ in data, along with the context of their connections and their raw frequency counts. … Saffron learns from all sources of data including structured and unstructured data to support knowledge-based decision making.” It is being used extensively in the financial services industry.

In 2016, Intel acquired Nervana, a startup developing AI software and hardware for machine learning. In 2017, Intel revealed the Nervana Neural Network Processor (NNP), designed expressly for AI and deep learning.

Intel acquired Movidius in 2016 for its vision processing unit (VPU) technology for machine learning and AI. Intel’s Movidius devices include dedicated imaging and computer vision processing and an integrated neural compute engine. Applications for VPUs include license plate readers at bridges and toll roads, airport security screening, drone surveillance, and the many applications of facial recognition.

ARM – Trillium Platform

The Arm Trillium platform includes Machine Learning (ML) and Object Detection (OD) processors along with Arm software, the existing Arm Compute Library, and CMSIS-NN neural network kernels.
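The CMSIS-NN kernels operate on 8-bit fixed-point (q7) data rather than floats, which is what makes them practical on small Cortex-M devices. A minimal sketch of the quantization step, assuming a Q3.4 format (4 fractional bits — my illustrative choice; these helper functions are not CMSIS-NN APIs):

```python
# Sketch of q7 (signed 8-bit fixed-point) quantization, as used by
# CMSIS-NN kernels. Q3.4 format chosen for illustration: 4 fractional bits.
def float_to_q7(x, frac_bits=4):
    """Quantize a float to q7: scale, round, and saturate to int8 range."""
    q = round(x * (1 << frac_bits))
    return max(-128, min(127, q))

def q7_to_float(q, frac_bits=4):
    """Dequantize a q7 value back to float."""
    return q / (1 << frac_bits)

print(float_to_q7(1.5))     # 24
print(q7_to_float(24))      # 1.5
print(float_to_q7(100.0))   # 127 (saturated)
```

Trading float precision for 8-bit arithmetic is the central design choice here: it shrinks weights 4x and lets the kernels use fast integer multiply-accumulate instructions.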

 

It will be interesting to see how the packaging community develops solutions compatible with these advanced high-speed HPC applications.

For all the latest on advanced packaging, stay linked to IFTLE…
