3D-Micromac AG (booth #1645 in the South Hall) this week introduced the microPREP 2.0 laser ablation system for high-volume sample preparation of metals, semiconductors, ceramics and compound materials for microstructure diagnostics and failure analysis (FA).

Built on a highly flexible platform with a small table-top footprint, the microPREP 2.0 allows for easy integration into FA workflows. Developed jointly with Fraunhofer Institute for Microstructure of Materials and Systems (IMWS), the microPREP 2.0 complements existing approaches to sample preparation such as focused ion beam (FIB) micromachining, offering up to 10,000 times higher ablation rates and therefore an order of magnitude lower cost of ownership (CoO) compared to FIB. As the first stand-alone, ultrashort pulsed laser-based tool for sample preparation, the microPREP 2.0 brings additional unique capabilities, such as enabling large-area and 3D-shape sampling to allow for more comprehensive testing of complex structures.

Cutting and preparing samples from semiconductor wafers, dies and packages for microstructure diagnostics and FA is an essential but time-consuming and costly step. The primary method of sample preparation used in semiconductor and electronics manufacturing today is FIB micromachining, which can take several hours to prepare a typical sample. FIB only allows for very small sample sizes, and precious FIB time is wasted by “digging” excavations needed for cross-sectional imaging in a scanning electron microscope or making a TEM lamella. Reaching larger depths or widths is severely restricted by the limited ablation rate.

3D-Micromac’s microPREP 2.0 significantly accelerates these critical steps, bringing sample preparation for semiconductor and materials research to a new level. By off-loading the vast majority of sample prep work from the FIB tool and relegating FIB to final polishing or replacing it completely depending on application, microPREP 2.0 reduces time to final sample to less than one hour in many cases.

“This award-winning tool brings unprecedented flexibility into sample prep. We at Fraunhofer IMWS are facing the need for targeted, artifact-free and most reliable preparation workflows to be able to serve our industry customers with cutting-edge microstructure diagnostics. Made for diverse techniques like SEM inspection of advanced-packaging devices, X-ray microscopy, atom probe tomography, and micro mechanics, microPREP was developed jointly with 3D-Micromac to close gaps in preparation workflows,” said Thomas Höche, Fraunhofer IMWS.

The microPREP 2.0 laser ablation system.

By Ed Korczynski

To fulfill the promise of the Internet of Things (IoT), the world needs low-cost, high-bandwidth radio-frequency (RF) chips for 5th-generation (5G) internet technology. Although standards are not yet completely defined, it is clear that 5G hardware will have to be more complex than 4G kit, because it will have to provide a total solution that is ultra-reliable with at least 10 Gb/second of bandwidth. A significant challenge remains in developing new high-speed, low-power transistor technologies for RF communications that allow IoT “edge” devices to operate reliably off of batteries.

At the most recent Imec Technology Forum in Antwerp, Belgium, Nadine Collaert, Distinguished MTS of imec, discussed recent research results from the consortium’s High-Speed Analog and RF Program. In addition to working on core transistor fabrication technology R&D, imec has also been working on system-technology co-optimization (STCO) and design-technology co-optimization (DTCO) for RF applications.

Comparing the system specifications needed for mobile handsets to those for base stations, transmitter power consumption should be 10x lower, while receiver power consumption needs to be 2x lower. Today, using silicon CMOS transistors, four power amplifiers alone consume 65% of a transmitter chip’s power. Heterojunction bipolar transistors (HBTs) and high-electron-mobility transistors (HEMTs) built using compound semiconductors such as gallium arsenide (GaAs), gallium nitride (GaN), or indium phosphide (InP) provide excellent RF device results. However, compared to making CMOS chips on silicon, HBT and HEMT manufacturing on compound semiconductor substrates is inherently expensive and difficult.

Heterojunction bipolar transistors and high-electron-mobility transistors both rely upon the precise epitaxial growth of semiconductor layers, and such growth is easier when the underlying substrate material has a similar atomic arrangement. While it is much more difficult to grow epi-layers of compound semiconductors on silicon wafers, imec does R&D using 300-mm-diameter silicon substrates with a goal of maintaining device quality while lowering production costs. The Figure shows cross-sections of the two “tracks” of III-V and GaN transistor materials being explored by imec for future RF chips.

III-V-on-silicon and GaN-on-silicon RF device cross-sections, showing work on both heterojunction bipolar transistors (HBTs) and high-electron-mobility transistors (HEMTs) for 5G applications. (Source: imec)

Imec’s High-Speed Analog/RF Program objectives include the following:

  • High-speed III-V RF devices using low-cost, high-volume silicon-compatible processes and modules,
  • Co-optimization with advanced silicon CMOS to reduce form factor and enable power-efficient systems with higher performance, and
  • Technology-circuit design co-optimization to enable complex RF-FEM modules with heterogeneous integration.

5G technology deployment will start at frequencies below 6 GHz, because technologies in that range have already been proven and the costs are known. However, after about five years the focus will shift to the “mm-wave” range, with the first band at ~28 GHz. GaN, with its wide bandgap and high charge density, has been a base-station technology, and it could be an ideal material for low-power mm-wave RF devices in future handsets.

This R&D leverages the III-V-on-silicon capability that imec has developed for CMOS-photonics integration. RF transistors could be stacked over CMOS transistors using wafer- or die-stacking, or both could be monolithically co-integrated on one silicon chip. Work on monolithic integration of GaN-on-silicon is happening now, and could also be used for photonics, where faster transistors can improve the performance of optical links.

By Pete Singer

Nitrous oxide (N2O) has a variety of uses in the semiconductor manufacturing industry. It is the oxygen source for chemical vapor deposition of silicon oxy-nitride (doped or undoped) or silicon dioxide, where it is used in conjunction with deposition gases such as silane. It’s also used in diffusion (oxidation, nitridation, etc.), rapid thermal processing (RTP) and for chamber seasoning.

Why these uses – and more importantly what happens to the gas afterward – may soon come under more scrutiny: N2O is being included for the first time in the IPCC (Intergovernmental Panel on Climate Change) GHG (greenhouse gas) guidelines. The IPCC is refining guidelines released in 2006 and expects to have a new revision in 2019. “Refined guidelines are actually up and coming, and the inclusion of nitrous oxide in them is a major revision from the 2006 document,” said Mike Czerniak, Environmental Solutions Business Development Manager at Edwards. Czerniak is on the IPCC committee and lead author of the semiconductor section.

Although the semiconductor industry uses a very small amount of N2O compared to other applications (dentistry, whipped cream, drag racing, scuba diving), it is a concern because, after CO2 and CH4, N2O is the third most prevalent man-made GHG, accounting for 7% of emissions. According to the U.S. Environmental Protection Agency, 5% of U.S. N2O originates from industrial manufacturing, including semiconductor manufacturing.

Czerniak said the semiconductor industry has been very proactive about trying to offset and reduce its carbon dioxide footprint. “The aspiration set by the World Semiconductor Council is to reduce the carbon footprint of a chip to 30 percent of what it was in 2010, which itself was a massive reduction from what it used to be back in the last millennium,” he said. Unfortunately, although that trend had been going down for the first half of the decade, it started going up again in 2016. “Although each individual processing step has a much lower carbon footprint than it used to have, the number of processing steps is much higher than it used to be,” Czerniak explained. “In the 1990s, it might take 300-400 processing steps to make a chip. Nowadays you’re looking at 2,000-4,000 steps.”
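The arithmetic behind Czerniak’s observation can be sketched with hypothetical numbers: even if each step’s footprint drops severalfold, a large enough increase in step count pushes the per-chip total back up.

```python
# Illustrative arithmetic only: the per-step footprint values below are
# hypothetical, chosen to mirror the step counts Czerniak cites.
# Total footprint per chip = (processing steps) x (footprint per step).

def chip_footprint(steps, footprint_per_step):
    """Total carbon footprint of one chip, in arbitrary CO2-equivalent units."""
    return steps * footprint_per_step

# 1990s-era flow: ~400 steps, high footprint per step (hypothetical units)
legacy = chip_footprint(steps=400, footprint_per_step=10.0)

# Modern flow: each step is 5x cleaner, but there are 7.5x more steps
modern = chip_footprint(steps=3000, footprint_per_step=2.0)

print(legacy)  # 4000.0
print(modern)  # 6000.0 -- the total rises despite the cleaner individual steps
```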

There are two ways of abating N2O so that it does not pollute the atmosphere: reduce it or oxidize it. Oxidizing it – which creates NO2 and NO (and other oxides known as NOx) – is not the way to go, according to Czerniak. “These oxides have their own problems. NOx is a gas that most countries are trying to reduce emissions of. It’s usually found as a byproduct of fuel combustion, particularly in things like automobiles, and it adds to things like acid rain,” he said.

Edwards’ view is that it’s much better to minimize the formation of the NOx in the first place. “The good news is that it is possible inside a combustion abatement system, where the gas comes in at the top; we burn a fuel gas and air on a combustor pad, and basically the main reactant gas then is water vapor, which we use to remove the fluorine effluent, which is the one we normally try to get rid of from chamber cleans,” Czerniak said.

The tricky part is that information from the tool is required. “We can — when there is nitrous oxide present on a signal from the processing tool — add additional methane fuel into the incoming gas specifically to act as a reducing agent to reduce the nitrous oxide to nitrogen and water vapor,” he explained. “We inject it at just the right flow rate to effectively get rid of the nitrous oxide without forming the undesirable NOx byproducts.”

Figure 1 shows how careful control of combustion conditions, through the addition of CH4, makes them reducing rather than oxidizing during the N2O step. A flow of 30 slm of N2O represents two typical process chambers.
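The reducing chemistry can be illustrated with a back-of-the-envelope stoichiometry sketch. The article does not disclose Edwards’ actual reaction conditions or flow recipe; the reaction and numbers below are the textbook methane-reduction stoichiometry, used purely for illustration.

```python
# Hedged sketch: the textbook reduction reaction is
#     CH4 + 4 N2O -> CO2 + 2 H2O + 4 N2
# which lets us estimate the stoichiometric methane flow for a given N2O load.

N2O_PER_CH4 = 4  # moles of N2O reduced per mole of CH4 in the reaction above

def stoichiometric_ch4_flow(n2o_flow_slm):
    """CH4 flow (slm) that exactly balances the incoming N2O flow.
    For ideal gases at the same conditions, molar and volumetric flow
    ratios are equal, so slm values can be divided directly."""
    return n2o_flow_slm / N2O_PER_CH4

# 30 slm of N2O (two typical process chambers, per the article)
print(stoichiometric_ch4_flow(30.0))  # 7.5 (slm of CH4)
```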

“It’s not complicated technology,” Czerniak concluded. “You just have to do it right.”

By Pete Singer

In a keynote talk on Tuesday in the Yerba Buena theater, Dr. John E. Kelly, III, Senior Vice President, Cognitive Solutions and IBM Research, talked about how the era of Artificial Intelligence (AI) is upon us, and how it will dramatically change the world. “This is an era of computing which is at a scale that will dwarf the previous era, in ways that will change all of our businesses and all of our industries, and all of our lives,” he said. “This will be another 50, 60 or more years of breakthrough technology innovation that will change the world. This is the era that’s going to power our semiconductor industry forward. The number of opportunities is enormous.”

Dr. John E. Kelly, III, Senior Vice President, Cognitive Solutions and IBM Research

Kelly, with 40 years of experience in the industry, recalled how the first era of computing began with mechanical computers 100 years ago and then transitioned into the programmable era of computing. In 1980, Kelly said, “we were trying to stack two 16-kilobit DRAMs to get a 32-kilobit stack, and we were trying to cram a thousand transistors into a microprocessor.” Microprocessors today have 15 billion transistors. “It’s been a heck of a ride,” he said.

IBM’s Summit is not only the biggest computer in the world but also the smartest, according to Kelly.

Kelly pointed to the power of exponentials, noting that Moore’s Law represented the first exponential and Metcalfe’s Law – which says the value of a network increases as the square of the number of devices connected to it – is the second. Kelly said there’s no end in sight to this second exponential, as connected medical devices and other Internet of Things devices come online.
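Metcalfe’s Law is easy to state in code; a minimal sketch (with the proportionality constant dropped) shows why adding devices grows network value quadratically rather than linearly:

```python
# Minimal sketch of Metcalfe's Law, proportionality constant dropped:
# network value grows as the square of the number of connected devices.

def metcalfe_value(n_devices):
    """Relative network value per Metcalfe's Law."""
    return n_devices ** 2

# Doubling the connected devices quadruples the network value:
print(metcalfe_value(1000))  # 1000000
print(metcalfe_value(2000))  # 4000000
```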

A third exponential is now upon us, Kelly said. “The core of this exponential is that data is doubling every 12 to 18 months. In fact, in some industries like healthcare, data is doubling every six months,” he said. The challenge is that the data is useless unless it can be analyzed. “Our computers are lousy in dealing with that large unstructured data and frankly there aren’t enough programmers in the world to deal with that explosion of data and extract value,” Kelly said. “The only way forward is through the use of machine learning and artificial intelligence to extract insights from that data.”

Kelly talked about IBM’s history of AI – teaching early IBM machines to play checkers, beating chess grandmaster Garry Kasparov with Deep Blue, Watson’s Jeopardy wins and, most recently, Project Debater, which “not only can answer questions but can listen to a person’s argument on something, reason and counter-argue in full natural language against that position in a full dialogue, continuously.”

What’s changed? “We continue to make advances in artificial intelligence, machine learning and deep learning algorithms that are just stunning,” Kelly said. “We are now able to learn over smaller and smaller amounts of data and translate that learning from one domain to another to another to another and start to get scale. Now is the time when this exponential is going to really explode.”

How does that equate to opportunity? Kelly said that on top of the existing $1.5-2 trillion information technology industry, there’s another $2 trillion of decision-support opportunity for artificial intelligence. “Literally every industry in the world, whether it’s industrial products, financial services, retail – every industry in the world is going to be impacted and transformed by this,” he said.

Quantum computing, which Kelly described as a fourth exponential, is also coming, and it will in turn dwarf all of the previous ones. “Beyond AI, this is going to be the most important thing I’ve ever seen in my career. Quantum computing is a complete game changer,” he said.

The bad news? During his talk, Kelly sounded one cautionary note: “Companies that lead exponentials win. Companies that don’t lead, or even try to quickly follow, fail on exponential curves. Our industry is littered with examples of that,” he said.

By Pete Singer

Many new innovations were discussed at imec’s U.S. International Technology Forum (ITF) on Monday at the Grand Hyatt in San Francisco, including quantum computing, artificial intelligence, sub-3nm logic, memory computing, solid-state batteries, EUV, RF and photonics, but perhaps the most interesting was new technology that enables human cells, tissues and organs to be grown and analyzed on-chip.

After an introduction by SEMI President Ajit Manocha – who said he believes the semiconductor industry will reach $1 trillion in market size by 2030 (“there’s no shortage of killer applications,” he said) – Luc Van den hove, president and CEO of imec, kicked off the afternoon session speaking about many projects underway that bring leading microelectronics technologies to bear on today’s looming healthcare crisis. “We all live longer than ever before and that’s fantastic,” he said. “But by living longer we also spend a longer part of our life being ill. What we need is a shift from extending lifespan to extending healthspan. What we need is to find ways to cure and prevent some of these diseases like cancer, like heart diseases and especially dementia.”

Drug development today is time-consuming and costly because of the insufficiency of existing methodologies for drug screening assays. These assays are based on poor cell models that limit the quality of the resulting data and their biological relevance. Additionally, the assays lack spatial resolution, making it impossible to screen single cells in a cell culture. “It is rather slow, it is quite labor intensive and it provides limited information,” Van den hove said. “With our semiconductor platform we have developed recently a multi-electrode array (MEA) chip on which we can grow cells, in which we can grow tissue and organs. We can monitor processes that are happening within the cells or between the cells during massive drug testing.”

The MEA (see Figure) packs 16,384 electrodes, distributed over 16 wells, and offers multiparametric analysis. Each of the 1,024 electrodes in a well can detect intracellular action potentials, aside from the traditional extracellular signals. Further, imec’s chip is patterned with microstructures to allow for a structured cell growth mimicking a specific organ.

A novel organ-on-chip platform for pharmacological studies with unprecedented signal quality. It fuses imec’s high-density multi-electrode array (MEA)-chip with a microfluidic well plate, developed in collaboration with Micronit Microtechnologies, in which cells can be cultured, providing an environment that mimics human physiology.

Earlier this year, in May at imec’s ITF forum in Europe, Veerle Reumers, project leader at imec, explained how the MEA works: “By using grooves, heart cells can for example grow into a more heart-like tissue. In this way, we fabricate miniature hearts-on-a-chip, making it possible to test the effect of drugs in a more biologically relevant context. Imec’s organ-on-chip platform is the first system that enables on-chip multi-well assays, which means that you can perform different experiments or – in other words – analyze different compounds, in parallel on a single chip,” she explained. “This is a considerable increase in throughput compared to current single-well MEAs and we aim to further increase the throughput by adding more wells in a system.”

Van den hove said they have been testing the chip. “The beauty of the semiconductor platform is that we can, because of the miniaturization capability, parallelize an enormous amount of this testing and accelerate drug testing. We can measure what we never measured before, at speeds that you couldn’t think of before.”

He added that imec recently embarked on a new initiative aimed at curing dementia, called Mission Lucidity. “Together with some of our clinical biomedical research teams, we are on a mission to decode dementia, to develop a cure to prevent this disease,” he said.

The MEA will be one tool used in the initiative, but also coming into play will be the group’s neuroprobes – which Van den hove said are among the world’s most advanced and are being used by nearly all the leading neuroscience research teams – along with next-generation wearables. “By combining these tools, we want to better understand the processes that are happening in the brain. We can measure those processes with much higher resolution than what could be done before. This may make it possible to detect the onset of disease earlier on. By administering the right medication earlier, we hope to be able to prevent the disease from further progressing,” he said.

Intel has won SEMI’s 2018 Award for the Americas. SEMI honored the celebrated chipmaker for pioneering process and integration breakthroughs that enabled the first high-volume Integrated Silicon Photonics Transceiver. The award was presented yesterday at SEMICON West 2018.

SEMI’s Americas Awards recognize technology developments with a major impact on the semiconductor industry and the world.

The Intel® Silicon Photonics 100G CWDM4 (Coarse Wavelength Division Multiplexing 4-lane) QSFP28 optical transceiver, a highly integrated optical connectivity solution, combines the power of optics and the scalability of silicon. The small form-factor, high-speed, low-power consumption 100G optical transceivers are used in optical interconnects for data communications applications, including large-scale cloud and data centers, and in Ethernet switch, router, and client telecommunications interfaces.

Dr. Thomas Liljeberg, senior director of R&D for Intel Silicon Photonics, accepted the award on behalf of Intel. Dr. Liljeberg is one of the technologists responsible for bringing Intel’s silicon photonics 100G transceivers to high-volume production.

“Every year SEMI honors key technological contributions and industry leadership through the SEMI Award,” said David Anderson, president, SEMI Americas. “Intel was instrumental in delivering technologies that will influence product design and system architecture for many years to come. Congratulations to Intel for this significant accomplishment.”

“The 2018 Award recognizes the enablement of high-volume manufacturing through technology leadership and collaboration with key vendors in the supply chain,” said Bill Bottoms, chairman of the SEMI Awards Advisory Committee. “Intel’s collaboration is a model for how the industry can accelerate innovation in the future.”

SEMI established the SEMI Award in 1979 to recognize outstanding technical achievement and meritorious contributions in the areas of Semiconductor Materials, Wafer Fabrication, Assembly and Packaging, Process Control, Test and Inspection, Robotics and Automation, Quality Enhancement, and Process Integration.

The SEMI Americas award is the highest honor conferred by the SEMI Americas region. It is open to individuals or teams from industry or academia whose specific accomplishments have a broad commercial impact and widespread technical significance for the entire semiconductor industry. Nominations are accepted from individuals of North American-based member companies of SEMI. For a list of past award recipients, visit www.semi.org/semiaward.

BY DEBRA VOGLER, SEMI, Milpitas, CA

With chipmakers looking toward 5nm manufacturing, it’s clear that traditional scaling is not dead but continuing in combination with other technologies. The industry sees scaling enabled by 3D architectures such as die stacking and the stacking of very small geometry wafers. Interconnect scaling also comes into play. This year’s Scaling Technologies TechXPOT at SEMICON West (Scaling Every Which Way! – Thursday, July 12, 2:00PM-4:00PM) will provide an update on the evolution of scaling and describe how the various players (foundry, IDM, fabless, and application developers) are jockeying for innovation leadership. As a prelude to the event, SEMI asked speakers to provide insights on important scaling trends. For a full list of speakers and program agenda, visit http://www.semiconwest.org/programs-catalog/scaling-every-which-way.

Challenges for gate-all-around (GAA) and FinFET devices

Common performance boosters for gate-all-around (GAA) FETs and FinFETs include lower access resistance, lower parasitic capacitance, and stress. “However, one specific performance booster that only applies to GAA is the reduction of the spacing between the vertical wires or sheets,” says Diederik Verkest, imec distinguished member of technical staff, Semiconductor Technology and Systems.

“This reduces parasitic capacitance without affecting drive current and hence benefits both performance and power.” He further notes that imec demonstrated the first stacked gate-all-around (GAA) devices in scaled nodes. “In fact, we are the only ones that published working circuits – ring oscillators in a scaled node using industry-standard processes – in our case replacement metal gate (RMG), and embedded in situ doped source/drain (S/D) epitaxy.”

“There are two elements of the stacked GAA architecture that need to be addressed,” says Verkest. “The first is that this architecture uses epitaxially-grown layers of Si and SiGe to define the device channel. The use of grown materials for the channel and the lattice mismatch between the two materials represent a departure from the traditional fabrication of CMOS devices, so the industry needs to develop and gain confidence in novel metrology that allows for good control of the layers and also proves their low defectivity.” The second aspect is the three-dimensional nature of the GAA devices. “During the processing of these devices, we have ‘non-line-of-sight’ hidden features that are difficult to control and characterize and may also lead to new defect mechanisms that would impact yield, and possibly product reliability.”

Huiming Bu, director, Advanced Logic/Memory Research – Integration and Device, IBM Research, Semiconductor Group, says that naming of technology nodes has been used extensively for marketing strategies in “foundry land,” but the designations have lost much of their meaning as technology scaling differentiators. “That said, when it comes to technology innovation and value proposition, IBM, in conjunction with Samsung and GLOBALFOUNDRIES, has developed the GAA NanoSheet transistor for 5nm to provide a full technology node scaling benefit in density, power and performance,” says Bu (FIGURE 1). The key parameters for intrinsic device optimization when scaling to the 3nm node, explains Bu, are the NanoSheet width for better electrostatic characteristics, and the number of sheets for increased current density. Also necessary are strain engineering for carrier transport enhancement, and interconnect innovations for parasitic RC reduction.

“Beyond that, the industry needs to look into something different, something more disruptive.”

Materials challenges

Materials challenges are also a concern as the industry moves to 5nm and below. “We see increasing complexity in the material systems that are being used,” explains Verkest. One example he cites in scaled FinFET or GAA technologies is the use of two to three layers of different materials – typically metals such as TiN – to which small amounts of other elements are added to set device characteristics such as the threshold voltage. “At the same time, the requirements for the thickness of these materials, driven by gate dimensions for example, or the distance between the wires, are increasingly challenging.” Other examples of materials challenges are the use of two to three different types of insulators in the middle-of-the-line, each with different etch contrasts. “We use novel materials such as carbon-containing oxides or oxynitrides that have lower dielectric constants in order to boost the performance of circuits,” he says, noting that the materials list “is quite long.”

Several critical dimensions in transistors at advanced technology nodes have already reached a few monolayers of atoms, fueling expectations for innovation at the material level for transistor scaling, Bu notes. “The other argument is that there is a growing gap between computing demand and the slowdown of technology advancement driven by conventional scaling,” says Bu. One trend that addresses this gap is integrating more computing functions that make the technology solution more modular, which naturally leads to the incorporation of more materials for more applications. Bu cautions, however, that introducing new materials in semiconductor technology has never been easy. “It takes many years of R&D to reach this implementation point, if it ever happens. So, do we need new materials when the industry moves to 5nm and 3nm? Yes, though I expect new material implementation to be a lot faster in interconnect and packaging at these nodes rather than intrinsic to the transistor.”

Challenges in developing atomic-level processes

There will be challenges in developing atomic-level processes used in scaling, such as atomic layer depositions (ALD) and atomic layer etches, notes Verkest. “These classes of processes are both required to handle the scaled dimensions at the 5nm and 3nm nodes, and also the 3D nature of the scaled technologies – and here we are talking about logic and memories,” Verkest says. “With respect to depositions, we would need to develop thermal ALD processes (not plasma-based) that enable accurate and conformal depositions in non-line-of-sight structures.”

Adhesion and wetting, smoothness, and throughput would also need to be addressed. “Longer term, these processes need to facilitate selectivity and self-alignment to address gap-fill challenges in highly scaled structures,” he says. Other concerns he notes with respect to atomic layer etches are selectivity to various materials, and fidelity requirements that increase the requirements for metrology accuracy. “Throughput is also a concern.”

Bu believes that a new device architecture beyond FinFET is required to provide a full technology node scaling benefit (i.e., density, power and performance) at 5nm and 3nm.

“Beyond 3nm, we may need to continue the transistor scaling in the vertical direction and start to stack them together,” Bu says. He also cites the need for parasitic R/C reduction in the interconnect to take advantage of the intrinsic transistor benefit at the circuit and chip levels. “We see a lot of opportunity in atomic-level processes, especially in atomic layer etch and selective material deposition, to address these challenges in the transistor and the interconnect.”

By Dave Lammers

The semiconductor industry is collecting massive amounts of data from fab equipment and other sources. But is the trend toward using that data in a Smart Manufacturing or Industry 4.0 approach happening fast enough in what Mike Plisinski, CEO of Rudolph Technologies, calls a “very conservative” chip manufacturing sector?

“There are a lot of buzzwords being thrown around now, and much of it has existed for a long time with APC, FDC, and other existing capabilities. What was inhibiting the industry in the past was the ability to align this huge volume of data,” Plisinski said.

While the industry became successful at adding sensors to tools and collecting data, the ability to track that data and make use of it in predictive maintenance or other analytics thus far “has had minimal success,” he said. With fab processes and manufacturing supply chains getting more complex, customers are trying to figure out how to move beyond implementing statistical process control (SPC) on data streams.

What is the next step? Plisinski said now that individual processes are well understood, the next phase is data alignment across the fab’s systems. As control of leading-edge processes becomes more challenging, customers realize that the interactions between the process steps must be understood more deeply.

“Understanding these interactions requires aligning these digital threads and data streams. When a customer understands that when a chamber changes temperature by point one degrees Celsius, it impacts the critical dimensions of the lithography process by X, Y, and Z. Understanding those interactions has been a significant challenge and is an area that we have focused on from a variety of angles over the last five years,” Plisinski said.
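The kind of alignment Plisinski describes (matching a chamber-temperature stream to downstream lithography CD measurements by timestamp, then quantifying the interaction) can be sketched in plain Python; all of the stream names and values below are hypothetical:

```python
import math

# Hypothetical streams: (timestamp_seconds, value) pairs from two tools.
chamber_temp = [(0, 20.0), (10, 20.1), (20, 20.3), (30, 20.2), (40, 20.5)]
litho_cd_nm  = [(2, 45.0), (11, 45.2), (22, 45.6), (29, 45.4), (41, 46.0)]

def align_nearest(stream_a, stream_b):
    """For each sample in stream_b, pick the stream_a sample with the
    closest timestamp; returns paired values."""
    pairs = []
    for tb, vb in stream_b:
        ta, va = min(stream_a, key=lambda s: abs(s[0] - tb))
        pairs.append((va, vb))
    return pairs

def pearson(pairs):
    """Pearson correlation coefficient of the paired values."""
    xs = [a for a, _ in pairs]
    ys = [b for _, b in pairs]
    n = len(pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(align_nearest(chamber_temp, litho_cd_nm))
print(round(r, 3))  # close to 1.0 for these strongly coupled example streams
```

In production this would run over millions of samples in a columnar store rather than Python lists, but the alignment-then-correlate structure is the same.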

Rudolph engineers have worked to integrate multiple data threads (see Figure), aligning various forms of data into one database for analysis by Rudolph’s Yield Management System (YMS). “For a number of years we’ve been able to align data. The limitation was in the database: the data storage, the speed of retrieval and analysis were limitations. Recently new types of databases have come out – columnar rather than relational – and they have been perfect for factory data analysis, for streaming data. That’s been a huge enabler for the industry,” he said.

Rudolph engineers have worked to integrate multiple data threads into one database.

Leveraging AI’s capabilities

A decade ago, Rudolph launched an early neural-network based system designed to help customers optimize yields. The software analyzed data from across a fab to learn from variations in the data.

“The problem back then was that neural networks of this kind used non-linear math that was too new for our conservative industry, an industry accustomed to first-principles analytics. As artificial intelligence has been used in other industries, AI is becoming more accepted worldwide, and our industry is also looking at ways to leverage some of the capabilities of artificial intelligence,” he said.

Collecting and making use of data within a fab is “no small feat,” Plisinski said, but that leads to sharing and aligning data across the value chain: the wafer fab, packaging and assembly, and others.

“The goal is to gain increased insights from these data streams or digital threads, to bring them all together and make sense of all of it. It is what I call weaving a fabric of knowledge: taking individual data threads, bringing them together, and weaving a much clearer picture of what’s going on.”

Security concerns run deep

One of the biggest challenges is how to securely transfer data between the different factories that make up the supply chain. “Even if they are owned by one entity, transferring that large volume of data, even if it’s over a private dedicated network, is a big challenge. If you start to pick and choose to summarize the data, you are losing some of the benefit. Finding that balance is important.”

The semiconductor industry is gaining insights from companies analyzing, for instance, streaming video. The network infrastructures, compression algorithms, transfers of information from mobile wireless devices, and other technologies are making it easier to connect semiconductor fabs.

“Security is perhaps the biggest challenge. It’s a mental challenge as much as a technical one, and by that I mean there is more than reluctance, there’s a fundamental disdain for letting the data out of a factory, for even letting data into the factory,” he said.

Within fabs, there is a tug of war between equipment vendors, which want to own the data and provide value-added services, and customers, who argue that since they own the tools they own the data. The contentious debate grows more intense when vendors talk about taking data out of the fab. “That’s one of the challenges that the industry has to work on — the concerns around security and competitive information getting leaked out.” Developing a front-end process is “a multibillion dollar bet, and if that data leaks out it can be devastating to market-share leadership,” Plisinski said.

Early adopter stories

The challenge facing Rudolph and other companies is to convince their customers of the value of sharing data; that “the benefits will outweigh their concerns. Thus far, the proof of the benefit has been somewhat limited.”

“At least from a Rudolph perspective, we’ve had some early adopters that have seen some significant benefits. And I think as those stories get out there and as we start to highlight what some of these early adopters have seen, others at the executive level in these companies will start to question their teams about some of their assumptions and concerns. Eventually I think we’ll find a way forward. But right now that’s a significant challenge,” Plisinski said.

It is a classic chicken-and-egg problem, making it harder to get beyond theories to case-study benefits. “What helped us is that some of the early adopters had complete control of their entire value chain. They were fully integrated. And so we were able to get over the concerns about data sharing and focus on the technical challenges of transferring all that data and centralizing it in one place for analytical purposes. From there we got to see the benefits and document them in a way that we could share with others, while protecting IP.”

Aggregating data, buying databases and analytical software, building algorithms – all cost money, in most cases adding up to millions of dollars. But if yields improve by 0.25 or 0.5 percent, the payback comes in six to eight months, he said.
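The shape of that payback arithmetic is easy to sketch. The fab revenue and project cost below are assumptions chosen for illustration, not figures from the interview:

```python
# Hypothetical figures for illustration only (not from the interview).
ANNUAL_REVENUE = 2_000_000_000   # assumed fab wafer revenue, USD/year
PROJECT_COST = 5_000_000         # assumed cost of databases, software, algorithms

def payback_months(yield_gain):
    """Months to recoup PROJECT_COST from the extra good-die revenue."""
    extra_per_month = ANNUAL_REVENUE * yield_gain / 12
    return PROJECT_COST / extra_per_month

for gain in (0.0025, 0.005):     # 0.25% and 0.5% yield improvement
    print(f"{gain:.2%} yield gain -> payback in {payback_months(gain):.0f} months")
```

With these assumed inputs, a 0.5 percent yield gain pays the project back in about six months; the actual break-even point depends entirely on the fab’s revenue and the project’s cost.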

“It’s a very conservative industry, an applied science type of industry. Trying to prove the value of software — a kind of black magic exercise — has always been difficult. But as the industry’s problems have become so complex, it is requiring these sophisticated software solutions.”

“We will have examples of successful case studies in our booth during SEMICON West. Anyone wanting further information is invited to stop by and talk to our experts,” adds Plisinski.

By Ed Korczynski

Industry R&D consortium imec runs a series of technology forums around the world, starting in June in Antwerp, Belgium, and including a stop in July in San Francisco in coordination with SEMICON West. Greg McIntyre, imec Director of Advanced Patterning, discussed the state-of-the-art in Extreme Ultra-Violet (EUV) lithography technology with Solid State Technology during the Antwerp event. While still focusing on “path-finding” R&D for industry, the recent technology challenges associated with commercializing EUV lithography have pulled imec into work on patterning ecosystem materials such as resists and pellicles.

With each NXE:3400B model EUV stepper from ASML valued at US$125 million, it costs $1 billion to invest in a set of eight tools to begin high-volume manufacturing, and the entire lithography materials supply-chain is engaged in improving the availability and throughput of this expensive tool-set. High-performance logic ICs need EUV to reach the smallest and most powerful FETs possible, so EUV is in pilot production for logic chips at Samsung and TSMC this year, and will likely begin pilot ramps at Intel and GlobalFoundries next year.

The first use of EUV in IC HVM will be as “cut-masks” for use in self-aligned multi-patterning (SAMP) process flows that start with argon-fluoride-immersion (ArFi) deep ultra-violet (DUV) steppers. Such a first use allows for substitution of three ArFi “multi-color” cut-masks in place of the one EUV mask, in case there are unanticipated issues with the new EUV steppers. Second use in HVM will then happen using a single exposure of EUV to pattern metal layers, but with no ability to use multiple ArFi exposures as a back-up.

“We will not put EUV in our critical path,” commented Dr. Gary Patton, GlobalFoundries’ CTO and SVP of Worldwide R&D, during a presentation in Antwerp, “But it’s clear that it’s coming and it will offer compelling advantages.” Patton said the company is experimenting with two of ASML’s EUV steppers in a New York fab, and will launch the company’s “7-nm-node” finFET production first with ArFi and then move to EUV when the throughput and uptime of the process make it affordable in their cost models.

Figure 1 shows the extremely small patterning process window around 18nm half-pitch line arrays (P36) using EUV lithography with dipole source-mask optimization (SMO): micro-bridging between lines starts below 15.5nm, while breaks within lines start above 18nm. These stochastic failures (Ref: “Wiggle-room for Black Swans: EUV Stochastics”, SemiMD.com) are caused by variations in the photons absorbed by the resist (a.k.a. “shot noise”), the quantum efficiency of photo-acid generation (PAG) and diffusion, the quencher distribution, and optical and chemical interactions with under-layers for adhesion, anti-reflective coatings, and hardmasks.

Figure 1. Stochastic failures due to atomic-scale variability are shown in top-down CD-SEM images taken from 36-nm Pitch (P36) line/space arrays of post-etched photoresist that had been patterned using EUV lithography, which define the limits of the patterning process window when plotted as Percent Not-OK (%NOK) within an inspected area. (Source: imec)
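The photon shot-noise contribution follows directly from Poisson statistics: each EUV photon carries ~92 eV, so a given energy dose delivers roughly 14x fewer photons than the same dose at 193nm, and the relative dose fluctuation over a feature scales as one over the square root of the photon count. The sketch below estimates this for illustrative inputs (the 30 mJ/cm2 dose and 10nm x 10nm area are assumptions, not imec data):

```python
import math

EV_PER_EUV_PHOTON = 1239.84 / 13.5   # hc/lambda in eV*nm -> ~91.8 eV at 13.5nm
EV_PER_JOULE = 1 / 1.602e-19

def photons_per_nm2(dose_mj_cm2):
    """Incident EUV photons per square nanometer at a given exposure dose."""
    joules_per_nm2 = dose_mj_cm2 * 1e-3 * 1e-14   # 1 cm^2 = 1e14 nm^2
    return joules_per_nm2 * EV_PER_JOULE / EV_PER_EUV_PHOTON

def relative_dose_noise(dose_mj_cm2, area_nm2):
    """Poisson shot noise: sigma/mean = 1/sqrt(photon count over the area)."""
    return 1 / math.sqrt(photons_per_nm2(dose_mj_cm2) * area_nm2)

# At an assumed 30 mJ/cm^2 dose, ~20 photons land on each square nanometer,
# so a 10nm x 10nm region sees roughly 2% dose fluctuation from photons alone.
noise = relative_dose_noise(30, 10 * 10)
```

This counts only incident-photon noise; as the paragraph above notes, absorption statistics, PAG quantum efficiency and diffusion, and quencher distribution add further variability on top of it.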

Every nanometer of resolution is difficult to achieve when patterning below 20nm half-pitch, with many parameters contributing noise to the signal. For EUV lithography using reflective optics, the mask surface causes undesired “flare” reflections from the un-patterned area, such that bright-field masks inherently distort images more than dark-field masks. Since cuts typically expose less than 20% of the field, cut-masks built as dark-field masks will be much less noisy.

Given the need for dark-field cut-masks, the ideal photoresist will be positive-tone (PT), which means that reformulations of Chemically-Amplified Resists (CAR) based on organic molecules can be used. Standard organic CAR tuned for ArFi lithography provides some sensitivity to EUV, and blends of standard CAR molecules can be tuned to improve the trade-offs within the inherent Resolution, Line-Edge-Roughness (LER), and Sensitivity triangle. Consequently, all of the suppliers of ArFi CAR are capable of supplying some EUV CAR. Since stochastic effects are interdependent, resist vendors have to explore integration options within the entire stack of patterning materials.

JSR co-founded with imec the EUV Resist Manufacturing & Qualification Center NV (EUV RMQC) in Leuven, Belgium, where an EUV stepper at imec is available for experiments. “RMQC is running at full speed, and shipping out production lots,” said McIntyre. “Intel’s Britt Turkot mentioned at SPIE this year that the resist qualification work being done at imec has been very beneficial.”

ASML now owns the critical-dimension scanning-electron microscopy (CD-SEM) technology of Hermes Microvision Inc. (HMI), and Neal Callan, ASML’s Vice President of Pattern Fidelity Metrology, spoke with Solid State Technology about controlling EUV patterning. Electron-beams cause shrinkage in organic films like CAR, and that shrinkage results in a CD bias that can be more than one nanometer. Different CAR formulations from different vendors shrink at different rates, and the effect is more difficult to model in 2D structures. “We’re being pushed for accurate metrology in terms that can be quantified,” explained Callan. “The biggest issues are in terms of CD-bias with 2D features. We need to build more accurate models to create better data for OPC and for computational lithography, and also for our etch modeling peers.”

“Design rules for EUV need to be stochastically aware,” confided McIntyre. “Designers need to know how much can be sacrificed in a design rule such as tip-to-tip spacing depending on the pattern pitch. There are different ways that we can think about minimizing stochastic effects.”

While stochastics and systematic yield losses increase in relative importance with decreasing device dimensions, losses due to random defects are also more difficult to control. Figure 2 shows second-generation EUV pellicles made from carbon nano-tubes (CNT) by imec to protect EUV masks from random particles while transmitting ~95% of the EUV light. First-generation pellicles reportedly transmit <90%.

Figure 2. Second generation EUV pellicles based on carbon nano-tubes (CNT) demonstrate increased transmission of ~95% while maintaining sufficient mechanical stability to protect reticles. (Source: imec)
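Because EUV masks are reflective, the scanner’s light crosses the pellicle twice — once on the way in and once after reflecting off the reticle — so wafer-level throughput scales with the square of the single-pass transmission. A quick calculation using the transmission figures quoted above shows why the CNT pellicle matters:

```python
def double_pass(transmission):
    """Fraction of EUV light surviving two passes through the pellicle."""
    return transmission ** 2

gen1 = double_pass(0.90)   # first generation: ~81% of light reaches the wafer
gen2 = double_pass(0.95)   # CNT pellicle: ~90% of light reaches the wafer
gain = gen2 / gen1 - 1     # ~11% more light, i.e. proportionally more throughput
```

For a $125 million stepper whose output is largely source-power limited, an ~11% light gain translates almost directly into wafers per day.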

“Today, new purity challenges are not only faced by the fab but also by their materials suppliers, driving sharp increases in the use of filtration and purification systems to prevent wafer defects and process excursions,” explained Clint Harris, Senior Vice President and General Manager, Microcontamination Control Division, Entegris, to Solid State Technology. “The transition from 45nm- to 10nm-node has resulted in a 2.5x increase in the changeout frequency of filters as well as a 4x reduction of maximum allowable contaminant size. This trend is expected to continue as device parametric performance becomes more sensitive to particles, gels, metals, mobile ions, and other organic contaminants.”

[As a TECHCET Analyst, Ed Korczynski writes the TECHCET Critical Materials Report (CMR) on Photoresists & Ancillaries. https://techcet.com/product/photoresists-and-photoresist-ancillaries/]

By Paula Doe, SEMI

The fast-maturing infrastructure that is enabling applications for big data and artificial intelligence means disruptive change not just at individual companies but also in data connections among companies across the microelectronics manufacturing value chain. SEMI checked in with some leading players on the changes they see coming in the next several years for this article series. The trade group is expanding its programming on smart manufacturing to address these industry-wide developments at SEMICON West, July 10-12 in San Francisco.

“The ramp of EUV, and the smaller geometries and smaller process margins, will drive an exponential increase in the amount of metrology data to manage,” says Neal Callan, ASML vice president, Silicon Valley. Callan notes that moving to multibeam e-beam inspection will increase data volume from megabytes per second to gigabytes per second and from thousands of data points to millions of data points. “The process is so tight and the margin so small that stochastic variation, or noise, becomes more dominant – at least it’s noise until we can learn to understand and control it. And understanding and controlling this variation will be key to delivering 5nm patterning,” he says.

Single-beam e-beam inspection is already driving large increases in data as engineers extend the slow technology to broad, high-speed defect metrology applications by more intelligently instructing the system where to look for problems. Callan says ASML is now using the scanner data on wafer focus, alignment and leveling. The company is also using the computational lithography model from the design to identify the smallest process windows in the pattern that are most likely to see problems. The model then quantifies the number and significance of those instances.

“The collection of all this diverse data means that tools will need to be plug-and-play so all tool data is instantly available to all systems and software,” says Doug Suerich, PEER Group product evangelist. “We need tools that can be discovered automatically by the network so it can start slurping up data immediately. The adoption of the Interface A (EDA) standard is accelerating and fabs are starting to ask for it. The proliferation of sensors also needs to self-discover. If you are going to add thousands of new sensors into a facility, you can’t afford a time-consuming integration process.”

“We are now seeing that engineers are greedy for more data – if they can get the data, it’s becoming a need-to-have,” adds Tom Ho, BISTel America president. “Getting more data from more sensors, from the sensors on the tool that are not being fully utilized, and from untapped data sources like vibration is another big coming opportunity.”

Process complexity drives demand for feed forward between silos with computational models

ASML co-optimizes its scanner process with etch and reticle process steps. Source: ASML

In addition to the drive for trace-back of data, the increasing complexity of interrelated processes is also driving demand for feed-forward of data. “Feed-forward is becoming more important,” notes Ho. He points to the example of 3D NAND features, now getting so deep that identifying the layer being measured is a challenge unless the signal at the step before can be recognized.

“We need partnerships with our peers to understand how to take advantage of the sensors they use, integrate them with our data, and then feed-forward corrections to the other systems,” concurs Callan. “To drive the best CD uniformity and overlay, we need to co-optimize litho and etch,” agrees Henk Niesing, ASML director of product management. He notes that the company is working with etcher makers to measure the overlay and CD, decompose the fingerprints, and then use models to steer automated control that best adjusts both the scanner and the etcher. ASML is also working with Zeiss on co-optimization between the scanner and the reticle to make even higher-order corrections by locally modifying the reticle.

These higher-order corrections, applied on each exposed field, drive the need for even more data, and at higher speed but without higher cost, notes Jan Mulkens, ASML senior fellow. These corrections increase demand for computational metrology, which combines various metrology sources with physics and deep learning models trained on real data to predict and control process results in real time. “We’re working on computational metrology to ideally use all the knobs we have in the fab,” he says.

So far this effort has largely involved linking data between two companies. More consistent data formats would enable data exchange to be extended to more companies. “The software versions also need to be managed for upgrades so they still match after one party updates the system on its tool,” notes Niesing.

Speakers on these issues of smart manufacturing and data handling at SEMICON West include Active Layer Parametrics, Applied Materials, Applied Research & Photonics, ASML, Cimetrix, Coventor, ECI Technologies, Edwards Vacuum, Final Phase Systems, GE Digital, Infineon, Jabil, Lam Research, Osaro, Otosense, PEER Group, Rockwell Automation, Rudolph Technologies, Schneider Electric, Seagate, Siemens, Stanford University, TEL, TIBCO Software. See semiconwest.org