
By Serena Brischetto

SEMI met with Jay Zhang, business development director at Corning Incorporated, to discuss recent innovations at Corning that allow fine-granularity CTE engineering as well as high Young’s modulus. We also talked about the impact of this work on in-process warp control, as well as the associated production methodology that provides rapid prototyping and high-volume manufacturing. We spoke ahead of his presentation at the 3D & Systems Summit, 28-30 January 2019, in Dresden, Germany.

SEMI: What is Corning’s mission and vision and your role within the company?

Zhang: Corning is one of the world’s leading innovators in materials science with a track record of 165+ years of life-changing innovations. We excel in glass science, ceramics science, and optical physics and succeed through sustained investment in RD&E. Our products include Corning® Gorilla® glass, a durable material used on more than six billion mobile devices worldwide, and industry-leading LCD glass for display applications.

We have recently dedicated a unit of the company called Precision Glass Solutions to address the emerging need for glass in the semiconductor industry. Here we apply Corning’s long history of glass science expertise and deep customer relationships in consumer electronics to support cutting-edge applications like wafer-level optics for precise 3D sensing and carrier solutions for temporary bonding applications in semiconductor manufacturing. It’s our most recent work in the Carrier Solutions product line that I’m excited to present: a new carrier glass product optimized for fan-out, called Corning Advanced Packaging Carriers.

SEMI: What projects are you currently working on that you think will make a difference in 2019?

Zhang: My team is excited to introduce Corning Advanced Packaging Carriers this year. This is a new product line within our portfolio of Carrier Solutions. These ultra-flat glass carriers are specially developed to reduce customers’ in-process warp by up to 40 percent, which in turn helps advanced packaging customers achieve better yield.

Corning Advanced Packaging Carriers feature high-stiffness properties and are available in a wide range of coefficients of thermal expansion (CTE) in fine granularity. These attributes help customers select an ideal glass carrier that will minimize in-process warp for their package. Furthermore, we make sample quantities of these carriers available in just four to six weeks to help maximize efficiency during customers’ R&D process.

My team is excited about the potential of this new product and encouraged by our early results. We have already supplied this product, and one of the largest semiconductor companies in Taiwan has reported that it reduced in-process warp by as much as 150μm.

SEMI: Your presentation at the 3D & Systems Summit will focus on Agile Manufacturing of Glass Carriers for Advanced Packaging. What exactly will you be sharing?

Zhang: There is a lot of interest right now in using glass as a carrier substrate in temporary bonding applications in advanced semiconductor packaging – especially in fan-out processes. We also know that in-process warp is a significant challenge to companies pursuing advanced packaging because different CTE materials are added during the process.

My team has done a lot of work to understand the impact that an ideal CTE glass carrier substrate can have on minimizing in-process warp. We have studied the available levers – both theoretical and in real-life fab environments – that can help address this challenge. I will present our findings on how it is possible to select a glass carrier with the ideal CTE and Young’s modulus to reduce in-process warp by up to 40 percent, and how Corning has developed an agile manufacturing platform to support customers with these ideal carriers from their R&D stage through mass production.

SEMI: What do you think will be a hot topic in the next few years?

Zhang: We expect high-end fan-out technology to address more applications beyond mobile application processors (APs). There is also an interesting dynamic playing out between wafer-level and panel-level fan-out technologies. Corning is active in both areas. By developing and offering high-performance glass carriers, we hope to help our customers expand the fan-out applications space.

SEMI: What are your expectations for the summit in Dresden, and why do you recommend that SEMI members and other industry leaders attend the 2019 3D & Systems Summit?

Zhang: Europe is where some of the most advanced packaging technologies are born. Fan-out also saw early commercialization there. I hope to meet many scientists and technologists at 3D & Systems Summit and exchange technical and business ideas. We also hope to get early feedback from other attendees about the value of our new product offering.

Serena Brischetto is a marketing and communications manager at SEMI Europe.

This originally appeared on the SEMI blog.

Corning Incorporated (NYSE: GLW) today introduced its latest breakthrough in glass substrates for the semiconductor industry – Advanced Packaging Carriers. This enhanced line of glass carrier wafers is optimized for fan-out processes, a type of cutting-edge semiconductor packaging that enables smaller, faster chips for consumer electronics, automobiles, and other connected devices.

Corning Advanced Packaging Carriers feature three significant improvements:

– Fine granularity in a wide range of available coefficients of thermal expansion (CTE)
– High stiffness composition
– Rapid sampling availability

These attributes are important for customers pursuing fan-out packaging because:

– Fine granularity enables customers to more easily select the optimal CTE needed to minimize in-process warp. Precise CTE offerings thereby help reduce customers’ development cycle time.
– Corning’s high stiffness compositions help further reduce in-process warp. Minimizing warp helps maximize their yield of packaged chips.
– Rapid sampling availability also contributes to reduced development time and enables customers to move to the mass production phase more quickly.
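The link between carrier CTE and in-process warp can be illustrated with a simple bi-material (Stoney-type) estimate. This is only a sketch: a real fan-out stack has many layers, and every numeric value below (layer CTE, moduli, thicknesses, temperature excursion) is a hypothetical placeholder, not Corning data.

```python
# Illustrative Stoney-type estimate of thermally induced warp for a single
# build-up layer on a glass carrier. All parameter values are hypothetical.

def warp_um(carrier_cte_ppm, layer_cte_ppm=7.0, delta_t=150.0,
            e_layer=70e9, e_carrier=77e9, t_layer=50e-6,
            t_carrier=700e-6, diameter=0.3):
    """Approximate center-to-edge warp (micrometers) of a carrier/layer
    pair after a temperature excursion delta_t (K), via Stoney's formula."""
    mismatch = (layer_cte_ppm - carrier_cte_ppm) * 1e-6 * delta_t  # strain
    curvature = 6 * e_layer * t_layer * mismatch / (e_carrier * t_carrier**2)
    return abs(curvature) * diameter**2 / 8 * 1e6  # sagitta over wafer, um

# A carrier whose CTE closely matches the build-up layer warps far less,
# and a stiffer (higher Young's modulus) carrier also reduces warp.
for cte in (3.0, 5.0, 7.0, 9.0):
    print(f"carrier CTE {cte} ppm/K -> warp ~ {warp_um(cte):.0f} um")
```

The model shows why fine CTE granularity matters: warp scales linearly with the residual CTE mismatch, so closer-spaced CTE offerings let a customer drive that mismatch term toward zero.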

“We created Corning Advanced Packaging Carriers especially for our customers pursuing the most challenging types of chip manufacturing processes,” said Rustom Desai, commercial director of Corning Precision Glass Solutions.

“Our deep technical ties in the semiconductor industry, combined with Corning’s core competencies in glass science and manufacturing, enabled us to create an innovative product that can help customers maximize efficiency throughout their development process and mass production ramp,” Desai said.

Corning’s semiconductor glass carriers are one of several products in Corning’s portfolio of Precision Glass Solutions designed to address the emerging need for glass across microelectronics. This portfolio provides customers with a one-stop shop for world-class capabilities including proprietary glass and ceramic manufacturing platforms, finishing processes, bonding technologies, best-in-class metrology, automated laser glass-processing, and optical design expertise.

By Emmy Yi

The SEMI Taiwan Testing Committee has been founded to strengthen the last line of defense ensuring the reliability of advanced semiconductor applications.

Mobile, high-performance computing (HPC), automotive, and IoT – the four future growth drivers of the semiconductor industry, plus the additional boost from artificial intelligence (AI) and 5G – will spur exponential demand for multi-function, high-performance chips. Today, 3D IC structures are beginning to integrate multiple chips to extend functionality and performance, making heterogeneous integration an irreversible trend.

As the number of chips integrated in a single package increases, the structural complexity also rises. Not only will this make identifying chip defects harder, but the compatibility and interconnection between components will also introduce uncertainties that can undermine the reliability of the final ICs. Add to these challenges the need for tight cost control and a faster time to market, and it’s clear that semiconductor testing requires disruptive, innovative change. Traditional final-product testing focusing on finished components is now giving way to wafer- and system-level testing.

In addition, the traditional notion of design for testing, an approach that enhances testing controllability and observability, is now coupled with the imperative to test for design, which emphasizes drawing analytics insights from collected test data to help reduce design errors and shorten development cycles. Going forward, the relationship among design, manufacturing, packaging, and testing will no longer be unidirectional. Instead, it will be a cycle of continuous improvement.

This paradigm shift in semiconductor testing, however, will also create a need for new industry standards and regulations, elevate visibility and security levels for shared data, require the optimization of testing time and costs, and lead to a shortage of testing professionals. Solving all these issues will require a joint effort by the industry and academia.

“With leading technologies and $4.7 billion in market value, Taiwan still holds the top spot in global semiconductor testing market,” said Terry Tsao, President of SEMI Taiwan. “When testing extends beyond the manufacturing process, it can play a critical role in ensuring quality throughout the entire life cycle from design and manufacturing to system integration while maintaining effective controls on development costs and schedules. Taiwan’s semiconductor industry is in dire need of a common testing platform to enable the cross-disciplinary collaboration necessary for technical breakthroughs.”

The new SEMI Taiwan Testing Committee was formed to meet that need, gathering testing experts and academics from MediaTek, Intel, NXP Semiconductors, TSMC, UMC, ASE Technology, SPIL, KYEC, Teradyne, Advantest, FormFactor, MJC, Synopsys, Cadence, Mentor, and National Tsing Hua University to collaborate in building a complete testing ecosystem. The committee addresses common technical challenges faced by the industry and cultivates next-generation testing professionals to enable Taiwan to maintain its global leadership in semiconductor testing.

The SEMI Taiwan Testing Platform spans communities, expositions, programs, events, networking, business matching, advocacy, and market and technology insights. For more information about the SEMI Taiwan Testing platform, please contact Elaine Lee ([email protected]) or Ana Li ([email protected]).

Emmy Yi is a marketing specialist at SEMI Taiwan.  

This story originally appeared on the SEMI blog.

Rudolph Technologies, Inc. (NYSE: RTEC) announced today that it has received orders for 12 of its Dragonfly™ G2 systems, just months after releasing the product. Several systems were delivered in the fourth quarter to the largest OSAT, where the Dragonfly G2 systems displaced incumbent 3D technology and retained the Company’s market leadership in 2D macro inspection. The remaining systems will ship in the first half of 2019 to OSAT, IDM, and foundry customers who are adopting the Dragonfly G2 platform for its high productivity in two-dimensional (2D) inspection and its accuracy and repeatability in three-dimensional (3D) inspection of the smallest copper pillars. The Company expects additional adoptions of the Dragonfly G2 system across multiple key market segments in the first half of 2019, which validates Rudolph’s collaborative R&D approach with its key customers.

The new Dragonfly G2 platform delivers up to a 150% improvement in productivity over legacy systems and exceeds competing systems’ throughput. Its modular architecture provides a flexible platform with plug-and-play configurability, combining 2D inspection with 3D Truebump™ Technology for accurate copper pillar/bump height measurements. Clearfind™ Technology detects non-visual residue defects, and advanced sensor technology measures 3D features and performs CD metrology. Additionally, the Dragonfly G2 platform has been specifically architected to allow the measurement, data collection, and analysis of bump interconnects nearing 100 million bumps per wafer using Rudolph’s Discover® software and advanced computing architecture.

“We are pleased that our leading-edge customers across multiple market segments are quickly recognizing the value of the Dragonfly G2 system,” said Michael Plisinski, chief executive officer at Rudolph. “Today’s interconnects for advanced memory are now at or below five microns, which require higher accuracy and repeatability versus standard copper pillar bumps. With approximately 65 wafer-level packages in today’s high-end smartphones, a single weak interconnect or reliability failure can result in a high cost of return, driving our customers’ need for the enhanced process control performance. Defect sensitivity, resolution, and productivity are combined in the Dragonfly G2 system to deliver a capability and cost of ownership that is unparalleled in the competitive space.”

Micron Technology, Inc. (Nasdaq: MU), a developer of memory and storage solutions, today announced that its monolithic 12Gb low-power double data rate 4X (LPDDR4X) DRAM has been validated for use in MediaTek’s new Helio P90 smartphone platform reference design. Micron’s LPDDR4X is capable of delivering up to 12GB of low-power DRAM (LPDRAM) in a single smartphone device. By stacking up to eight die in a single package, it offers double the memory capacity of the previous-generation product without increasing the footprint.

Use of enhanced mobile applications has accelerated consumer demand for compute and data-intensive attributes in handheld devices. This increase in demand has generated the need for high-value memory solutions that are capable of delivering the full potential of user features in next-generation smartphones. As the industry’s highest-capacity monolithic mobile memory, Micron’s LPDDR4X enables manufacturers of smartphones to deliver the benefits of high-resolution imaging, use of artificial intelligence (AI) for image optimization and multimedia features through its industry-leading bandwidth, capacity and power efficiency.

“Micron is committed to advancing the compute and data processing capabilities of smartphones and other edge devices, working with chipset vendors like MediaTek,” said Dr. Raj Talluri, senior vice president and general manager of the Mobile Business Unit at Micron Technology. “Our 12Gb monolithic LPDDR4X will unleash exciting new mobile applications in artificial intelligence and multimedia that will be further boosted by the availability of 5G.”

MediaTek’s Helio P90 smartphone chipset comes with the company’s most powerful AI technology to date — APU 2.0 — an innovative fusion AI architecture designed for powerful AI and gaming user experiences.

“MediaTek’s new Helio P90 smartphone platform delivers industry-leading performance for AI and imaging applications while maintaining power efficiency,” said Martin Lin, deputy general manager of MediaTek’s wireless communications business. “With its LPDDR4X, Micron supports our commitment to developing advanced technologies for smartphone platforms that enable richer mobile experiences.”

Micron LPDDR4X memory enables MediaTek to deliver the industry’s fastest LPDDR4 clock speeds and key improvements in power consumption, advancing performance within mobile devices for next-generation applications. By achieving data rates up to 4266 megabits per second (Mb/s) per pin and delivering high density within a thin package, LPDDR4X is capable of meeting the future needs of edge-AI data processing. High data rates help reduce data-transaction workloads by enabling machine learning on the device while still contributing to AI training in the cloud. As 5G mobile technology nears deployment, these capabilities will further enable more immersive and seamless experiences for mobile device users by supporting higher data rates and real-time data processing.
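The quoted per-pin data rate translates into package-level bandwidth with simple arithmetic. The 64-bit total bus width below (e.g., two 32-bit channels) is an assumption for illustration; actual phone memory configurations vary.

```python
# Back-of-the-envelope peak bandwidth for LPDDR4X at 4266 Mb/s per pin.
# The 64-bit aggregate bus width is an assumed configuration.

data_rate_mbps_per_pin = 4266
bus_width_bits = 64

peak_gbs = data_rate_mbps_per_pin * bus_width_bits / 8 / 1000  # GB/s
print(f"Peak theoretical bandwidth: ~{peak_gbs:.1f} GB/s")  # ~34.1 GB/s
```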

The new MediaTek Helio P90 smartphone chipset with Micron LPDDR4X technology will be incorporated into mobile devices and is expected to enter mass production in summer 2019.

At Intel “Architecture Day,” top executives, architects and fellows revealed next-generation technologies and discussed progress on a strategy to power an expanding universe of data-intensive workloads for PCs and other smart consumer devices, high-speed networks, ubiquitous artificial intelligence (AI), specialized cloud data centers and autonomous vehicles.

Intel demonstrated a range of 10nm-based systems in development for PCs, data centers and networking, and previewed other technologies targeted at an expanded range of workloads.

The company also shared its technical strategy focused on six engineering segments where significant investments and innovation are being pursued to drive leaps forward in technology and user experience. They include: advanced manufacturing processes and packaging; new architectures to speed up specialized tasks like AI and graphics; super-fast memory; interconnects; embedded security features; and common software to unify and simplify programming for developers across Intel’s compute roadmap.

Together these technologies lay the foundation for a more diverse era of computing in an expanded addressable market opportunity of more than $300 billion by 2022.

Intel Architecture Day Highlights:

Industry-First 3D Stacking of Logic Chips: Intel demonstrated a new 3D packaging technology, called “Foveros,” which for the first time brings the benefits of 3D stacking to enable logic-on-logic integration.

Foveros paves the way for devices and systems combining high-performance, high-density and low-power silicon process technologies. Foveros is expected to extend die stacking beyond traditional passive interposers and stacked memory to high-performance logic, such as CPU, graphics and AI processors for the first time.

The technology provides tremendous flexibility as designers seek to “mix and match” technology IP blocks with various memory and I/O elements in new device form factors. It will allow products to be broken up into smaller “chiplets,” where I/O, SRAM and power delivery circuits can be fabricated in a base die and high-performance logic chiplets are stacked on top.

Intel expects to launch a range of products using Foveros beginning in the second half of 2019. The first Foveros product will combine a high-performance 10nm compute-stacked chiplet with a low-power 22FFL base die. It will enable the combination of world-class performance and power efficiency in a small form factor.

Foveros is the next leap forward following Intel’s breakthrough Embedded Multi-die Interconnect Bridge (EMIB) 2D packaging technology, introduced in 2018.

New Sunny Cove CPU Architecture: Intel introduced Sunny Cove, Intel’s next-generation CPU microarchitecture designed to increase performance per clock and power efficiency for general purpose computing tasks, and includes new features to accelerate special purpose computing tasks like AI and cryptography. Sunny Cove will be the basis for Intel’s next-generation server (Intel® Xeon®) and client (Intel® Core™) processors later next year.

Sunny Cove enables reduced latency and high throughput, as well as offers much greater parallelism that is expected to improve experiences from gaming to media to data-centric applications.

Next-Generation Graphics: Intel unveiled new Gen11 integrated graphics with 64 enhanced execution units, more than double previous Intel Gen9 graphics (24 EUs), designed to break the 1 TFLOPS barrier. The new integrated graphics will be delivered in 10nm-based processors beginning in 2019.

The new integrated graphics architecture is expected to double the computing performance-per-clock compared to Intel Gen9 graphics. With >1 TFLOPS performance capability, this architecture is designed to increase game playability. At the event, Intel showed Gen11 graphics nearly doubling the performance of a popular photo recognition application when compared to Intel’s Gen9 graphics. Gen11 graphics is expected to also feature an advanced media encoder and decoder, supporting 4K video streams and 8K content creation in constrained power envelopes. Gen11 will also feature Intel® Adaptive Sync technology enabling smooth frame rates for gaming.
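The "1 TFLOPS barrier" claim can be sanity-checked from the EU count. Each Intel Gen EU contains two 4-wide SIMD FPUs, and a fused multiply-add counts as 2 FLOPs, giving 16 FP32 FLOPs per EU per clock; the 1.0 GHz clock below is an assumed figure, not one stated in the announcement.

```python
# Rough FP32 throughput estimate for a 64-EU Gen11 GPU.
# 2 FPUs x 4 SIMD lanes x 2 FLOPs (FMA) = 16 FLOPs/EU/clock.
# The 1.0 GHz clock is an assumption for illustration.

eus = 64
flops_per_eu_per_clock = 2 * 4 * 2
clock_ghz = 1.0

tflops = eus * flops_per_eu_per_clock * clock_ghz / 1000
print(f"~{tflops:.2f} TFLOPS FP32")  # ~1.02 TFLOPS, crossing the barrier
```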

Intel also reaffirmed its plan to introduce a discrete graphics processor by 2020.

“One API” Software: Intel announced the “One API” project to simplify the programming of diverse computing engines across CPU, GPU, FPGA, AI and other accelerators. The project includes a comprehensive and unified portfolio of developer tools for mapping software to the hardware that can best accelerate the code. A public project release is expected to be available in 2019.

Memory and Storage: Intel discussed updates on Intel® Optane™ technology and the products based upon it. Intel® Optane™ DC persistent memory is a new product that converges memory-like performance with the data persistence and large capacity of storage. The revolutionary technology brings more data closer to the CPU for faster processing of bigger data sets like those used in AI and large databases. Its large capacity and data persistence reduce the need to make time-consuming trips to storage, which can improve workload performance. Intel Optane DC persistent memory delivers cache-line (64B) reads to the CPU. On average, the idle read latency with Optane persistent memory is expected to be about 350 nanoseconds when applications direct the read operation to Optane persistent memory, or when the requested data is not cached in DRAM. For scale, an Optane DC SSD has an average idle read latency of about 10,000 nanoseconds (10 microseconds), a remarkable improvement. In cases where requested data is in DRAM, either cached by the CPU’s memory controller or directed by the application, memory sub-system responsiveness is expected to be identical to DRAM (<100 nanoseconds).
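The latency figures above imply a large gap in average read latency depending on which tier backs DRAM. A minimal expected-latency model makes this concrete; the 90% DRAM hit rate is a hypothetical workload parameter, while the tier latencies are the approximate figures quoted above.

```python
# Simple average-latency model for the memory/storage tiers described above.
# Tier latencies are the approximate quoted figures; the DRAM hit rate is
# a hypothetical workload parameter.

LAT_DRAM_NS = 100
LAT_OPTANE_PMEM_NS = 350
LAT_OPTANE_SSD_NS = 10_000

def avg_read_latency_ns(dram_hit_rate, miss_tier_ns):
    """Expected read latency when DRAM misses fall through to one tier."""
    return dram_hit_rate * LAT_DRAM_NS + (1 - dram_hit_rate) * miss_tier_ns

# With the same 90% DRAM hit rate, backing DRAM with persistent memory
# instead of an SSD cuts the expected read latency dramatically.
print(avg_read_latency_ns(0.9, LAT_OPTANE_PMEM_NS))  # 125.0 ns
print(avg_read_latency_ns(0.9, LAT_OPTANE_SSD_NS))   # 1090.0 ns
```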

The company also showed how SSDs based on Intel’s 1 Terabit QLC NAND die move more bulk data from HDDs to SSDs, allowing faster access to that data.

The combination of Intel Optane SSDs with QLC NAND SSDs will enable lower latency access to data used most frequently. Taken together, these platform and memory advances complete the memory and storage hierarchy providing the right set of choices for systems and applications.

Deep Learning Reference Stack: Intel is releasing the Deep Learning Reference Stack, an integrated, highly-performant open source stack optimized for Intel® Xeon® Scalable platforms. This open source community release is part of our effort to ensure AI developers have easy access to all of the features and functionality of the Intel platforms. The Deep Learning Reference Stack is highly-tuned and built for cloud native environments. With this release, Intel is enabling developers to quickly prototype by reducing the complexity associated with integrating multiple software components, while still giving users the flexibility to customize their solutions.

Synopsys, Inc. (Nasdaq: SNPS) announced today another milestone in its longstanding partnership with imec, a research and innovation hub in nanoelectronics and digital technologies, with the successful completion of the first comprehensive sub-3 nanometer (nm) parasitic variation modeling and delay sensitivity study of complementary FET (CFET) architectures. With the potential to significantly reduce area versus traditional FinFETs, CFET is a promising option to maintain area scaling beyond 3nm technology.

In 3-nm and 2-nm process technologies, the magnitude of variation increases significantly for middle of line (MOL) parameters, as well as interconnect, due to high resistance of metal lines, vias, and surface scattering. Therefore, modeling parasitic variation and sensitivity is a critical factor in bringing CFET to mainstream production.

Prediction at early stages of process development will allow foundries to create more robust, variation-tolerant transistors, standard cells, and metal-interconnect methodologies. The QuickCap® NX 3D field solver, used in close collaboration between Synopsys R&D and imec research teams, enabled fast and accurate modeling of parasitics for a variety of device architectures and identification of the most critical device dimensions and properties. This allowed the optimization of CFET devices for better power/performance trade-offs. As part of a comprehensive set of tools spanning Raphael™ TCAD extraction to StarRC™ parasitic extraction for the largest systems-on-chip (SoCs), QuickCap NX helps process engineers understand the sensitivity of circuit performance to variations in process parameters and improves modeling accuracy by establishing golden reference values.
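The general shape of such a sensitivity study can be sketched with a finite-difference perturbation of a toy delay model. This is not the QuickCap NX flow: the resistance/capacitance formulas and nominal values below are invented placeholders used only to show how delay sensitivity to a process parameter (here, line width) can be quantified.

```python
# Illustrative delay-sensitivity study: estimate how an Elmore-style RC wire
# delay responds to variation in line width. The R/C models and nominal
# values are hypothetical placeholders, not QuickCap NX models.

def elmore_delay(width_nm, length_um=10.0):
    """Toy Elmore delay of a wire: R falls with width, C grows with it."""
    r_per_um = 50.0 * (20.0 / width_nm)        # ohms/um, ~1/width
    c_per_um = 0.2 * (width_nm / 20.0) + 0.15  # fF/um, area + fringe terms
    r = r_per_um * length_um
    c = c_per_um * length_um * 1e-15
    return 0.5 * r * c  # distributed-RC approximation, seconds

def sensitivity(width_nominal_nm, delta_frac=0.05):
    """Central finite difference, normalized: % delay change per % width."""
    d = width_nominal_nm * delta_frac
    hi = elmore_delay(width_nominal_nm + d)
    lo = elmore_delay(width_nominal_nm - d)
    nom = elmore_delay(width_nominal_nm)
    return (hi - lo) / (2 * d) * width_nominal_nm / nom

print(f"Normalized delay sensitivity at 20 nm width: {sensitivity(20.0):+.2f}")
```

In this toy model the sensitivity comes out negative (widening the line lowers resistance faster than it raises capacitance), which is the kind of directional insight such studies aim to establish before committing to a process option.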

“This work has allowed us to accurately model and analyze cell and interconnect variation at advanced processes and architectures, such as Complementary FET,” said Anda Mocuta, director, Technology Solutions and Enablement at imec. “Our collaboration with Synopsys continues a legacy of successful collaborations that enable us to search for technological breakthroughs below 3 nanometers. The capabilities of Synopsys tools, such as QuickCap NX, have been key to our joint research on variability.”

“Imec is at the forefront of research into semiconductor technology. Our collaboration with imec to develop variation-aware solutions down to 2 nanometer processes will benefit the entire semiconductor industry,” said Antun Domic, chief technology officer at Synopsys. “Utilizing the flexibility of Synopsys’ QuickCap NX 3D parasitic extraction interface, engineers can better target and significantly reduce the number of trials needed to optimize circuit performance in the presence of process variation and reduce circuit sensitivity. This significantly reduces the overall turnaround time for device and circuit optimization.”

Leti, a research institute at CEA Tech, has proven that RRAM-based ternary-content addressable memory (TCAM) circuits, featuring the most compact structure developed to date, can meet the performance and reliability requirements of multicore neuromorphic processors.

TCAM circuits provide a way to search large data sets using masks that indicate ranges. These circuits are, therefore, ideal for complex routing and big data applications, where an exact match is rarely necessary.  TCAM circuits allow searching for stored information by its content, as opposed to classic memory systems in which a memory cell’s stored information is retrieved by its physical address. They shorten the search time compared to classic memory-based search algorithms, as all the stored information is compared with the searched data in parallel, within a single clock cycle.
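The ternary search semantics can be modeled in a few lines of software. Each stored entry is a pattern over '0', '1', and 'X' (don't-care); a key matches when every non-X bit agrees. Hardware compares all entries in parallel within one clock cycle, whereas this sketch iterates sequentially for illustration.

```python
# Minimal software model of ternary CAM lookup. Entries are patterns of
# '0', '1', and 'X' (don't-care); a key matches an entry when every non-X
# bit agrees. Hardware does all comparisons in parallel; this is sequential.

def tcam_search(entries, key):
    """Return the indices of all entries the key matches."""
    def matches(pattern, key):
        return all(p in ('X', k) for p, k in zip(pattern, key))
    return [i for i, pattern in enumerate(entries) if matches(pattern, key)]

# Example: routing-style entries where X masks the low-order bits.
table = ["1010XXXX",   # matches any key starting 1010
         "10100001",   # exact entry
         "0XXXXXXX"]   # matches any key starting 0

print(tcam_search(table, "10100001"))  # -> [0, 1]
print(tcam_search(table, "01111111"))  # -> [2]
```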

But conventional SRAM-based TCAM circuits are usually implemented with 16 CMOS transistors per cell, which limits the storage capacity of TCAMs to tens of megabits in standard memory structures and takes up valuable silicon real estate in neuromorphic, spiking neural-network chips.

The breakthrough of the CEA-Leti project was to replace SRAM cells with resistive RAM (RRAM) in TCAM circuits, reducing the requirement per cell to two transistors (2T) and two RRAMs (2R), the most compact structure for these circuits produced to date. In addition, the RRAMs were fabricated on top of the transistors, further reducing area. Such a 2T2R structure can decrease the required TCAM area by a factor of eight compared with the conventional 16-transistor TCAM structure.

But while using RRAMs in TCAM circuits significantly reduces both silicon chip area needed and power consumption, and guarantees similar search speed compared to CMOS-based TCAM circuits, this approach brings new challenges:

  • Circuit reliability is strongly dependent on the ratio between the ON and OFF states of the memory cells. RRAM-based TCAM reliability could be affected by the relatively low ON/OFF ratio (~10-100) compared with the far higher ratio of the 16-transistor structure, and
  • RRAMs have limited endurance with respect to CMOS transistors, which can affect the lifespan of the system.

Overcoming these challenges requires trade-offs:

  • The voltage applied during a search operation can be decreased, which improves system reliability. However, this also degrades system performance, e.g. slower searches, and
  • The limited endurance can be overcome by either decreasing the voltage applied during each search, or increasing the power used to program the TCAM cells beforehand. Both increase system endurance, while slowing searches.

The work, presented Dec. 4 at IEDM 2018 in a paper entitled “In-depth Characterization of Resistive Memory-based Ternary Content Addressable Memories,” clarifies the link between RRAM electrical properties and TCAM performance with extensive characterizations of a fabricated RRAM-based circuit.

The research showed a trade-off exists between TCAM performance (search speed) and TCAM reliability (match/mismatch detection and search/read endurance). This provides insights into programming RRAM-based TCAM circuits for other applications, such as network packets routing.

“Assuming many future neuromorphic computing architectures will have thousands of cores, the non-volatility feature of the proposed TCAM circuits will provide an additional crucial benefit, since users will have to upload all the configuration bits only the first time the network is configured,” said Denys R.B. Ly, a Ph.D. student at Leti and lead author of the paper. “Users will also be able to skip this potentially time-consuming process every time the chip is reset or power-cycled.”

Leti, a research institute at CEA Tech, has reported breakthroughs in six 3D-sequential-integration process steps that previously were considered showstoppers in terms of manufacturability, reliability, performance or cost.

CoolCube™, CEA-Leti’s 3D monolithic (3D sequential) CMOS technology, allows vertically stacking several layers of devices with a unique connecting-via density above tens of millions/mm². This More Moore technology decreases die area by a factor of two while providing a 26 percent gain in power. The wire-length reduction enabled by CoolCube™ also improves yield and lowers costs. In addition to power savings, this true 3D integration opens diversification perspectives thanks to greater integration of functions. From a performance-optimization and manufacturing-enablement perspective, processing the top layer in a front-end-of-line (FEOL) environment with a restricted thermal budget requires optimization of the process modules.

CEA-Leti’s recent 3D sequential integration results were presented Dec. 3 at IEDM 2018 in the paper, “Breakthroughs in 3D Sequential Integration”. The breakthroughs are:

  • Low-resistance poly-Si gate for the top field-effect transistors (FETs)
  • Full LT RSD (low temperature raised source and drain) epitaxy, including surface preparation
  • Stable bonding above ultra low-k (ULK)
  • Stability of intermediate back end of line (iBEOL) between tiers with standard ULK/Cu technology
  • Efficient contamination containment for wafers with Cu/ULK iBEOL, enabling their re-introduction in front end of line (FEOL) for top FET processing, and
  • Smart Cut™ process above a CMOS wafer.


To obtain high-performance top FETs, low gate-access resistance was achieved using UV nanosecond laser recrystallization of in-situ doped amorphous silicon. A full 500°C selective silicon-epitaxy process was demonstrated with an advanced low-temperature surface preparation combining dry and wet etch. Epitaxial growth was demonstrated with the cyclic use of a new silicon precursor and dichlorine (Cl2) etching. At the same time, the project paved the way to manufacturability of 3D sequential integration, including iBEOL with standard ULK and Cu metal lines.

A bevel-edge contamination containment strategy comprising three steps (bevel etch, decontamination, encapsulation) enabled reintroducing wafers into an FEOL environment following the BEOL process. In addition, the project demonstrated for the first time the stability of line-to-line breakdown voltage for interconnections subjected to 500°C. The work also demonstrated a Smart Cut™ transfer of a crystalline silicon layer onto a processed bottom level of FD-SOI CMOS devices, as an alternative to the SOI bonding-and-etch-back process scheme for top-channel fabrication.

Researchers from Intel Corp. and the University of California, Berkeley, are looking beyond current transistor technology and preparing the way for a new type of memory and logic circuit that could someday be in every computer on the planet.

In a paper appearing online Dec. 3 in advance of publication in the journal Nature, the researchers propose a way to turn relatively new types of materials, multiferroics and topological materials, into logic and memory devices that will be 10 to 100 times more energy-efficient than foreseeable improvements to current microprocessors, which are based on CMOS (complementary metal-oxide-semiconductor).

Single crystals of the multiferroic material bismuth-iron-oxide. The bismuth atoms (blue) form a cubic lattice with oxygen atoms (yellow) at each face of the cube and an iron atom (gray) near the center. The somewhat off-center iron interacts with the oxygen to form an electric dipole (P), which is coupled to the magnetic spins of the atoms (M) so that flipping the dipole with an electric field (E) also flips the magnetic moment. The collective magnetic spins of the atoms in the material encode the binary bits 0 and 1, and allow for information storage and logic operations. Credit: Ramamoorthy Ramesh lab, UC Berkeley

The magneto-electric spin-orbit (MESO) devices will also pack five times as many logic operations into the same space as CMOS, continuing the trend toward more computations per unit area, a central tenet of Moore’s Law.

The new devices will boost technologies that require intense computing power with low energy use, specifically highly automated, self-driving cars and drones, both of which require ever increasing numbers of computer operations per second.

“As CMOS develops into its maturity, we will basically have very powerful technology options that see us through. In some ways, this could continue computing improvements for another whole generation of people,” said lead author Sasikanth Manipatruni, who leads hardware development for the MESO project at Intel’s Components Research group in Hillsboro, Oregon. MESO was invented by Intel scientists, and Manipatruni designed the first MESO device.

Transistor technology, invented 70 years ago, is used today in everything from cellphones and appliances to cars and supercomputers. Transistors shuffle electrons around inside a semiconductor and store them as binary bits 0 and 1.

In the new MESO devices, the binary bits are the up-and-down magnetic spin states in a multiferroic, a material first created in 2001 by Ramamoorthy Ramesh, a UC Berkeley professor of materials science and engineering and of physics and a senior author of the paper.

“The discovery was that there are materials where you can apply a voltage and change the magnetic order of the multiferroic,” said Ramesh, who is also a faculty scientist at Lawrence Berkeley National Laboratory. “But to me, ‘What would we do with these multiferroics?’ was always a big question. MESO bridges that gap and provides one pathway for computing to evolve.”

In the Nature paper, the researchers report that they have reduced the voltage needed for multiferroic magneto-electric switching from 3 volts to 500 millivolts, and predict that it should be possible to reduce this to 100 millivolts: one-fifth to one-tenth that required by CMOS transistors in use today. Lower voltage means lower energy use: the total energy to switch a bit from 1 to 0 would be one-tenth to one-thirtieth of the energy required by CMOS.
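To put the reported voltage reductions in rough perspective, here is an illustrative calculation under the simplifying assumption that switching energy scales as E = CV²/2 with a fixed device capacitance (the paper’s actual energy analysis is more involved, and the capacitance value below is a placeholder):

```python
# Illustrative only: capacitive switching energy E = C * V**2 / 2
# with a fixed, hypothetical device capacitance.
C = 1e-15  # farads; placeholder value, not from the paper

def switch_energy(v):
    """Energy (joules) to switch one bit at voltage v."""
    return 0.5 * C * v**2

e_3v   = switch_energy(3.0)  # earlier multiferroic switching voltage
e_500m = switch_energy(0.5)  # voltage demonstrated in the Nature paper
e_100m = switch_energy(0.1)  # projected voltage

# Under this quadratic scaling, 3 V -> 500 mV is a 36x energy reduction,
# and 500 mV -> 100 mV would give a further 25x.
print(e_3v / e_500m)
print(e_500m / e_100m)
```

The quadratic dependence on voltage is why even modest voltage reductions translate into large energy savings per switched bit.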

“A number of critical techniques need to be developed to allow these new types of computing devices and architectures,” said Manipatruni, who combined the functions of magneto-electrics and spin-orbit materials to propose MESO. “We are trying to trigger a wave of innovation in industry and academia on what the next transistor-like option should look like.”

Internet of things and AI

The need for more energy-efficient computers is urgent. The Department of Energy projects that, with the computer chip industry expected to expand to several trillion dollars in the next few decades, energy use by computers could skyrocket from 3 percent of all U.S. energy consumption today to 20 percent, nearly as much as today’s transportation sector. Without more energy-efficient transistors, the incorporation of computers into everything – the so-called internet of things – would be hampered. And without new science and technology, Ramesh said, America’s lead in making computer chips could be upstaged by semiconductor manufacturers in other countries.

“Because of machine learning, artificial intelligence and IoT, the future home, the future car, the future manufacturing capability is going to look very different,” said Ramesh, who until recently was the associate director for Energy Technologies at Berkeley Lab. “If we use existing technologies and make no more discoveries, the energy consumption is going to be large. We need new science-based breakthroughs.”

Paper co-author Ian Young, a UC Berkeley Ph.D., started a group at Intel eight years ago, along with Manipatruni and Dmitri Nikonov, to investigate alternatives to transistors, and five years ago they began focusing on multiferroics and spin-orbit materials, so-called “topological” materials with unique quantum properties.

“Our analysis brought us to this type of material, magneto-electrics, and all roads led to Ramesh,” said Manipatruni.

Multiferroics and spin-orbit materials

Multiferroics are materials whose atoms exhibit more than one “collective state.” In ferromagnets, for example, the magnetic moments of all the iron atoms in the material are aligned to generate a permanent magnet. In ferroelectric materials, on the other hand, the positive and negative charges of atoms are offset, creating electric dipoles that align throughout the material and create a permanent electric moment.

MESO is based on a multiferroic material consisting of bismuth, iron and oxygen (BiFeO3) that is both magnetic and ferroelectric. Its key advantage, Ramesh said, is that these two states – magnetic and ferroelectric – are linked or coupled, so that changing one affects the other. By manipulating the electric field, you can change the magnetic state, which is critical to MESO.

The key breakthrough came with the rapid development of topological materials with a strong spin-orbit effect, which allow the state of the multiferroic to be read out efficiently. In MESO devices, an applied electric field alters or flips the electric dipoles throughout the material, which in turn alters or flips the electron spins that generate the magnetic field. This capability comes from spin-orbit coupling, a quantum effect in materials, which produces a current whose direction is determined by the electron spin direction.
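The write/read mechanism described above can be sketched as a toy model. This is purely illustrative; the class and its fields are hypothetical abstractions of the coupled states, not an actual device simulation:

```python
# Toy model of the MESO mechanism: a voltage-driven electric field sets the
# ferroelectric polarization P; magneto-electric coupling flips the magnetic
# order M along with it; a spin-orbit layer reads M out as a current sign.
class MesoBit:
    """Bit stored as the magnetic order of a multiferroic element."""

    def __init__(self):
        self.polarization = +1   # electric dipole P: +1 or -1
        self.magnetization = +1  # magnetic order M, coupled to P

    def write(self, field_sign):
        # Magneto-electric write: the applied field sets P, and the
        # P-M coupling flips M together with it.
        self.polarization = field_sign
        self.magnetization = field_sign

    def read(self):
        # Spin-orbit read-out: the sign of the output charge current
        # tracks the magnetization direction; map it to a binary bit.
        return 1 if self.magnetization > 0 else 0

bit = MesoBit()
bit.write(-1)
print(bit.read())  # 0
bit.write(+1)
print(bit.read())  # 1
```

The point of the sketch is the coupling chain: the device is written electrically but stores and exposes its state magnetically, which is what distinguishes MESO from charge-based CMOS switching.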

In another paper that appeared earlier this month in Science Advances, UC Berkeley and Intel experimentally demonstrated voltage-controlled magnetic switching using the magneto-electric material bismuth-iron-oxide (BiFeO3), a key requirement for MESO.

“We are looking for revolutionary and not evolutionary approaches for computing in the beyond-CMOS era,” Young said. “MESO is built around low-voltage interconnects and low-voltage magneto-electrics, and brings innovation in quantum materials to computing.”