
Most people have felt that sting from grabbing a doorknob after walking across a carpet or seen how a balloon will stick to a fuzzy surface after a few moments of vigorous rubbing.

While the effects of static electricity have been fascinating casual observers and scientists for millennia, certain aspects of how the electricity is generated and stored on surfaces have remained a mystery.

Now, researchers have discovered more details about the way certain materials hold a charge even after two surfaces separate, information that could help improve devices that leverage such energy as a power source.

“We’ve known that energy generated in contact electrification is readily retained by the material as electrostatic charges for hours at room temperature,” said Zhong Lin Wang, Regents’ Professor in the School of Materials Science and Engineering at the Georgia Institute of Technology. “Our research showed that there’s a potential barrier at the surface that prevents the charges generated from flowing back to the solid where they were from or escaping from the surface after the contacting.”

Georgia Tech professor Zhong Lin Wang poses with an array of 1,000 LED lights that can be illuminated by power produced by the force of a shoe striking a triboelectric generator placed on the floor. (Credit: Rob Felt, Georgia Tech).

In their research, which was reported in March in the journal Advanced Materials, the researchers found that electron transfer is the dominant process for contact electrification between two inorganic solids, a finding that explains some of the characteristics already observed about static electricity.

“There has been some debate around contact electrification – namely, whether the charge transfer occurs through electrons or ions, and why the charges remain on the surface without quickly dissipating,” Wang said.

It’s been eight years since Wang’s team first published research on triboelectric nanogenerators, which employ materials that create an electric charge when in motion and could be designed to harvest energy from a variety of sources such as wind, ocean currents or sound vibrations.

“Previously we just used trial and error to maximize this effect,” Wang said. “But with this new information, we can design materials that have better performance for power conversion.”

The researchers developed a method using a nanoscale triboelectric nanogenerator – composed of layers either of titanium and aluminum oxide or titanium and silicon dioxide – to help quantify the amount of charge accumulating on surfaces during moments of friction.

The method was capable of tracking the accumulated charges in real time and worked over a wide range of temperatures, including very high ones. The data from the study indicated that the characteristics of the triboelectric effect, namely, how electrons flowed across barriers, were consistent with the electron thermionic emission theory.

By designing triboelectric nanogenerators that could withstand testing at high temperatures, the researchers also found that temperature plays a major role in the triboelectric effect.

“We never realized it was a temperature dependent phenomenon,” Wang said. “But we found that when the temperature reaches about 300 Celsius, the triboelectric transfer almost disappears.”

The researchers tested the ability of surfaces to maintain a charge at temperatures ranging from about 80 degrees Celsius to 300 degrees Celsius. Based on their data, the researchers proposed a mechanism to explain the physics of the triboelectrification effect.

“As the temperature rises, the energy fluctuations of electrons become larger and larger,” the researchers wrote. “Thus, it is easier for electrons to hop out of the potential well, and they either go back to the material where they came from or emit into air.”
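
The electron-escape picture the researchers describe can be illustrated with a toy calculation. The sketch below treats the surface charge as decaying at a Boltzmann-weighted rate over a potential barrier; the barrier height, attempt rate, and time scale are illustrative assumptions, not values from the paper:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def charge_remaining(q0, barrier_ev, temp_k, time_s, attempt_rate=1e12):
    """Fraction of triboelectric charge left after time_s seconds.

    Simple first-order decay: electrons escape the surface potential
    well at a rate attempt_rate * exp(-barrier / kT) (Boltzmann factor).
    All parameter values here are illustrative, not fitted to the
    paper's measurements.
    """
    rate = attempt_rate * math.exp(-barrier_ev / (K_B * temp_k))
    return q0 * math.exp(-rate * time_s)

# Charge kept after one hour, from room temperature up to ~300 C.
# In this toy model the charge survives the hour at 25 C but is gone
# at elevated temperatures, mirroring the qualitative trend reported.
for temp_c in (25, 150, 300):
    frac = charge_remaining(1.0, barrier_ev=1.0, temp_k=temp_c + 273.15,
                            time_s=3600)
    print(f"{temp_c:>3} C: {frac:.3f} of charge remains")
```
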

By Emmy Yi

The solar energy sector shined in a global renewable energy market that maintained steady growth last year despite the United States’ shocking withdrawal from the Paris Agreement. Solar panel costs dropped to an all-time low, driving global demand past the 100GW mark for the first time on the strength of a standout 26 percent annual growth rate.

Taiwan has vigorously pursued a transition to renewable energy since 2016. Most notably, Taiwan is phasing out nuclear power as it increases its reliance on climate-friendly energy sources and seeks more foreign investment. The hope is also to boost economic growth and create more jobs.

With its limited land space, the region is fertile ground for rooftop photovoltaic (PV) systems. In 2016, the Taiwan government set out on an ambitious plan to achieve 3,000MW of installed capacity by 2020 – enough to supply electricity for 1 million households – while improving air quality, sprucing up the urban landscape and generating jobs.
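
A quick sanity check of the target’s arithmetic: 3,000MW spread across 1 million households works out to 3kW of capacity each. The capacity factor below is an assumed illustrative value, not an official figure:

```python
target_mw = 3000
households = 1_000_000

kw_per_household = target_mw * 1000 / households

# Assumed ~14% capacity factor, a plausible value for rooftop PV;
# actual Taiwan figures may differ.
capacity_factor = 0.14
annual_kwh = kw_per_household * capacity_factor * 8760  # hours per year

print(f"{kw_per_household:.1f} kW of capacity per household")
print(f"~{annual_kwh:.0f} kWh generated per household per year")
```
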

The SEMI Taiwan Energy Group fully backs the government renewable-energy policy. Earlier this year, the group gathered more than 200 industry professionals and government officials to explore challenges and opportunities in deploying more rooftop PV systems. Here are some key takeaways:

Infrastructure Reliability Key to High Return on Investment

Size, reliability and safety are paramount in rooftop PV system design. To make the best use of space, reduce the cost per kWh, and ensure a long-term, stable supply of electric energy, the PV modules must be:

  • Compact to fit within limited rooftop space
  • Robust to endure extreme temperatures over long periods; resist fire, salt and water damage; and ensure safe, reliable operation

Financial Institutions Play an Important Role

In response to the government energy policy, domestic financial institutions have funded select projects or issued bonds and derivative products to support the development of Taiwan’s renewables industry. A key part of these efforts is to evaluate risks in areas such as system module safety, maturity of technologies and designs, energy-generating efficiency and maintenance costs.

A Truly Green Industry: Circular Economy

Energy storage systems are maturing rapidly to support expanding markets for renewable energy products. The market for home renewable energy systems is growing, fueled in part by low prices, and the adoption of electric vehicles continues to rise as advances in energy storage technology drive down costs and enable longer ranges. At the current pace of technological development, the world could be using 100 percent renewable energy to achieve the goal of zero emissions by 2025. However, to achieve a truly pollution-free environment, a circular economy – marked by the regeneration and reuse of resources – must be established.

For its part, the SEMI Taiwan Energy Group this year will transform the 11-year-old PV Taiwan exhibition into Energy Taiwan, Taiwan’s largest international platform for facilitating communication and collaboration of the entire renewable energy ecosystem. Exhibition themes will range from solar energy, wind energy, hydrogen energy and fuel cells to green transportation, smart energy storage and green finance. The event reflects the consolidation of the SEMI Taiwan Energy Group’s growing resources and its commitment to a circular economy free of fossil fuels.

Originally published on the SEMI blog.

Semiconductors–a class of materials that can function as both electrical conductor and insulator, depending on the circumstances–are an essential technology for all modern electronic innovations.

Silicon has long been the most famous semiconductor, but in recent years researchers have studied a wider range of materials, including molecules that can be tailored to serve specific electronic needs.

Perhaps appropriately, one of the most cutting-edge electronic technologies – the supercomputer – is an indispensable research tool for studying complex semiconducting materials at a fundamental level.

Recently, a team of scientists at TU Dresden used the SuperMUC supercomputer at the Leibniz Supercomputing Centre to refine its method for studying organic semiconductors.

Illustration of a doped organic semiconductor based on fullerene C60 molecules (green). The benzimidazoline dopant (purple) donates an electron to the C60 molecules in its surrounding (dark green). These electrons can then propagate through the semiconductor material (light green). Credit: S. Hutsch/F. Ortmann, TU Dresden

Specifically, the team uses an approach called semiconductor doping, a process in which impurities are intentionally introduced into a material to give it specific semiconducting properties. It recently published its results in Nature Materials.

“New kinds of semiconductors, organic semiconductors, are starting to get used in new device concepts,” said team leader Dr. Frank Ortmann. “Some of these are already on the market, but some are still limited by their inefficiency. We are researching doping mechanisms–a key technology for tuning semiconductors’ properties–to understand these semiconductors’ limitations and respective efficiencies.”

Quantum impurities

When someone changes a material’s physical properties, he or she also changes its electronic properties and, therefore, the role it can play in electronic devices. Small changes in material makeup can lead to big changes in a material’s characteristics–in certain cases one slight atomic alteration can lead to a 1000-fold change in electrical conductivity.
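
That sensitivity falls out of the simple Drude relation sigma = n·e·mu: conductivity scales directly with carrier density, so a dopant that adds 1,000 times more free carriers multiplies conductivity 1,000-fold. A minimal sketch with illustrative silicon-like numbers (not data from the study):

```python
E_CHARGE = 1.602e-19  # elementary charge, C

def conductivity(carrier_density_cm3, mobility_cm2_vs):
    """Drude-model conductivity sigma = n * e * mu, in S/cm."""
    return carrier_density_cm3 * E_CHARGE * mobility_cm2_vs

# Illustrative numbers for a silicon-like semiconductor (assumptions,
# not measured values): light doping raises carrier density 1000x.
intrinsic_n = 1.5e10   # intrinsic carriers per cm^3
doped_n = 1.5e13       # after doping
mobility = 1400        # electron mobility, cm^2/(V*s)

sigma_i = conductivity(intrinsic_n, mobility)
sigma_d = conductivity(doped_n, mobility)
print(f"conductivity ratio: {sigma_d / sigma_i:.0f}x")
```
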

While changes in material properties may be big, the underlying forces–exerting themselves on atoms and molecules and governing their interactions–are generally weak and short-range (meaning the molecules and the atoms of which they are composed must be close together). To understand changes in properties, therefore, researchers have to accurately compute atomic and molecular interactions as well as the densities of electrons and how they are transferred among molecules.

Introducing specific atoms or molecules to a material can change its conducting properties on a hyperlocal level. This allows a transistor made from doped material to serve a variety of roles in electronics, including routing currents to perform operations based on complex circuits or amplifying current to help produce sound in a guitar amplifier or radio.

Quantum laws govern interatomic and intermolecular interactions, in essence holding material together, and, in turn, structuring the world as we know it. In the team’s work, these complex interactions need to be calculated for individual atomic interactions, including interactions among semiconductor “host” molecules and dopant molecules on a larger scale.

The team uses density functional theory (DFT)–a computational method that can model electronic densities and properties during a chemical interaction–to efficiently predict the variety of complex interactions. It then collaborates with experimentalists from TU Dresden and the Institute for Molecular Science in Okazaki, Japan to compare its simulations to spectroscopy experiments.

“Electrical conductivity can come from many dopants and is a property that emerges on a much larger length scale than just interatomic forces,” Ortmann said. “Simulating this process needs more sophisticated transport models, which can only be implemented on high-performance computing (HPC) architectures.”

Goal!

To test its computational approach, the team simulated materials that already had good experimental datasets as well as industrial applications. The researchers first focused on C60, also known as Buckminsterfullerene.

Buckminsterfullerene is used in several applications, including solar cells. The molecule’s structure is very similar to that of a soccer ball – a spherical arrangement of carbon atoms in pentagonal and hexagonal patterns, measuring less than one nanometer across. In addition, the researchers simulated zinc phthalocyanine (ZnPc), another molecule used in photovoltaics that, unlike C60, has a flat shape and contains a metallic atom (zinc).

As its dopant the team first used a well-studied molecule called 2-Cyc-DMBI (2-cyclohexyl-dimethylbenzimidazoline). 2-Cyc-DMBI is considered an n-dopant, meaning that it can provide its surplus electrons to the semiconductor to increase its conductivity. N-dopants are relatively rare, as few molecules are “willing” to give away an electron. In most cases, molecules that do so become unstable and degrade during chemical reactions, which in this context can lead to an electronic device failure. 2-Cyc-DMBI dopants are the exception: they are only weakly attractive to electrons, which lets the donated electrons move over long distances, while the molecules themselves remain stable after the donation.

The team got good agreement between its simulations and experimental observations of the same molecule-dopant interactions. This indicates that they can rely on simulation to guide predictions as they relate to the doping process of semiconductors. They are now working on more complex molecules and dopants using the same methods.

Despite these advances, the team sees room to push further: next-generation supercomputers such as SuperMUC-NG – announced in December 2017 and set to be installed in 2018 – will help the researchers expand the scope of their simulations, leading to ever bigger efficiency gains in a variety of electronic applications.

“We need to push the accuracy of our simulations to the maximum,” Ortmann said. “This would help us extend the range of applicability and allow us to more precisely simulate a broader set of materials or larger systems of more atoms.”

Ortmann also noted that while current-generation systems allowed the team to gain insights in specific situations and prove its concept, there is still room to get better. “We are often limited by system memory or CPU power,” he said. “The system size and simulation’s accuracy are essentially competing for computing power, which is why it is important to have access to better supercomputers. Supercomputers are perfectly suited to deliver answers to these problems in a realistic amount of time.”

Historically, the DRAM market has been the most volatile of the major IC product segments.  A good example of this was displayed over the past two years when the DRAM market declined 8% in 2016 only to surge by 77% in 2017! The March Update to the 2018 McClean Report (to be released later this month) will fully detail IC Insights’ latest forecast for the 2018 DRAM and total IC markets.

In the 34-year period from 1978 through 2012, the DRAM price-per-bit declined by an average annual rate of 33%. However, from 2012 through 2017, the average DRAM price-per-bit decline was only 3% per year! Moreover, the 47% full-year 2017 jump in the price-per-bit of DRAM was the largest annual increase since 1978, surpassing the previous high of 45% registered 30 years ago in 1988!
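
Compounding makes the difference between those two eras dramatic. A short sketch of the arithmetic, using the average rates cited above:

```python
def cumulative_change(annual_rate, years):
    """Overall price multiplier after compounding annual_rate for years."""
    return (1 + annual_rate) ** years

# Average annual price-per-bit declines cited by IC Insights
early_era = cumulative_change(-0.33, 34)   # 1978-2012
recent_era = cumulative_change(-0.03, 5)   # 2012-2017

print(f"1978-2012: price-per-bit fell to {early_era:.2e} of its start")
print(f"2012-2017: price-per-bit fell to {recent_era:.2f} of its start")
```

At 33% per year the price-per-bit falls by roughly six orders of magnitude over 34 years, while at 3% per year it barely moves over five years, which is why the recent era stands out so sharply.
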

In 2017, DRAM bit volume growth was 20%, half the 40% rate of increase registered in 2016.  For 2018, each of the three major DRAM producers (Samsung, SK Hynix, and Micron) has stated that it expects DRAM bit volume growth to once again be about 20%.  However, as shown in Figure 1, monthly year-over-year DRAM bit volume growth averaged only 13% over the nine-month period of May 2017 through January 2018.

Figure 1 also plots the monthly price-per-Gb of DRAM from January of 2017 through January of 2018.  As shown, the DRAM price-per-Gb has been on a steep rise, with prices being 47% higher in January 2018 as compared to one year earlier in January 2017.  There is little doubt that electronic system manufacturers are currently scrambling to adjust and adapt to the skyrocketing cost of memory.

DRAM is usually considered a commodity like oil.  Like most commodities, there is elasticity of demand associated with the product.  For example, when oil prices are low, many consumers purchase big SUVs, with little concern for the vehicle’s miles-per-gallon efficiency.  However, when oil prices are high, consumers typically look toward smaller or alternative energy (e.g., hybrid or fully electric) options.

Figure 1

Figure 1

While difficult to precisely measure, it is IC Insights’ opinion that DRAM bit volume usage is also affected by elasticity, whereby increased costs inhibit demand and lower costs expand usage and open up new applications.  As shown in Figure 1, the correlation coefficient between the DRAM price-per-bit and the year-over-year bit volume increase from January 2017 through January 2018 was a strong -0.88 (a perfect correlation between two factors moving in the opposite direction would be -1.0).  Thus, while system manufacturers are not scaling back DRAM usage in systems currently shipping, there have been numerous rumors of some smartphone producers scaling back DRAM in next-generation models (i.e., incorporating 4GB of DRAM per smartphone instead of 5GB).
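
A Pearson correlation like the -0.88 cited above is computed from paired monthly series of price and bit-volume growth. The sketch below uses hypothetical numbers, not IC Insights’ actual data, to show the mechanics:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical paired monthly observations (illustrative only):
# rising price-per-Gb alongside falling year-over-year bit growth
price_per_gb = [0.50, 0.55, 0.60, 0.66, 0.70, 0.73]
bit_growth_pct = [24, 21, 18, 14, 12, 10]

r = pearson(price_per_gb, bit_growth_pct)
print(f"Pearson r = {r:.2f}")  # strongly negative: higher prices, lower growth
```
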

In 2018, IC Insights believes that the major DRAM suppliers will be walking a fine line between making their shareholders even happier than they are right now and further alienating their customer base.  If, and it is a BIG if, the startup Chinese DRAM producers can field a competitive product over the next couple of years, DRAM users could flock to these new suppliers in an attempt to get out from under the crushing price increases now being thrust upon them—with the “payback” to the current major DRAM suppliers being severe.

The Semiconductor Industry Association (SIA), representing U.S. leadership in semiconductor manufacturing, design, and research, today announced worldwide sales of semiconductors reached $37.6 billion for the month of January 2018, an increase of 22.7 percent compared to the January 2017 total of $30.6 billion. Global sales in January were 1.0 percent lower than the December 2017 total of $38.0 billion, reflecting normal seasonal market trends. All monthly sales numbers are compiled by the World Semiconductor Trade Statistics (WSTS) organization and represent a three-month moving average.

“After notching its highest-ever annual sales in 2017, the global semiconductor industry is off to a strong and promising start to 2018, posting its highest-ever January sales and 18th consecutive month of year-to-year sales increases,” said John Neuffer, president and CEO, Semiconductor Industry Association. “All major regional markets saw double-digit growth compared to last year, with the Americas leading the way with year-to-year growth of more than 40 percent. With year-to-year sales also up across all major semiconductor product categories, the global market is well-positioned for a strong start to 2018.”

Year-to-year sales increased substantially across all regions: the Americas (40.6 percent), Europe (19.9 percent), Asia Pacific/All Other (18.6 percent), China (18.3 percent), and Japan (15.1 percent). Month-to-month sales increased slightly in Europe (0.9 percent), held flat in China, but fell somewhat in Asia Pacific/All Other (-0.6 percent), Japan (-1.0 percent), and the Americas (-3.6 percent).

To find out how to purchase the WSTS Subscription Package, which includes comprehensive monthly semiconductor sales data and detailed WSTS Forecasts, please visit http://www.semiconductors.org/industry_statistics/wsts_subscription_package/. For detailed data on the global and U.S. semiconductor industry and market, consider purchasing the 2017 SIA Databook: https://www.semiconductors.org/forms/sia_databook/.

Jan 2018 (sales in billions of U.S. dollars)

Month-to-Month Sales

Market                   Last Month   Current Month   % Change
Americas                       8.95            8.63      -3.6%
Europe                         3.37            3.40       0.9%
Japan                          3.24            3.21      -1.0%
China                         12.01           12.01       0.0%
Asia Pacific/All Other        10.41           10.35      -0.6%
Total                         37.99           37.59      -1.0%

Year-to-Year Sales

Market                    Last Year   Current Month   % Change
Americas                       6.14            8.63      40.6%
Europe                         2.84            3.40      19.9%
Japan                          2.79            3.21      15.1%
China                         10.16           12.01      18.3%
Asia Pacific/All Other         8.73           10.35      18.6%
Total                         30.64           37.59      22.7%

Three-Month-Moving Average Sales

Market                  Aug/Sep/Oct   Nov/Dec/Jan   % Change
Americas                       8.54          8.63       1.1%
Europe                         3.36          3.40       1.1%
Japan                          3.20          3.21       0.3%
China                         11.65         12.01       3.1%
Asia Pacific/All Other        10.33         10.35       0.1%
Total                         37.09         37.59       1.4%
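
As a minimal sketch of how a three-month moving average like WSTS’s works, the function below averages each month with the two preceding months; the monthly sales figures are hypothetical, not WSTS data:

```python
def three_month_moving_average(monthly):
    """Average each month with the two months before it.

    Returns one value per month starting from the third month,
    mirroring how a three-month moving average smooths monthly totals.
    """
    return [sum(monthly[i - 2:i + 1]) / 3 for i in range(2, len(monthly))]

# Hypothetical monthly worldwide sales in $B (illustrative only)
monthly_sales = [36.0, 37.5, 38.0, 38.5, 37.0]
print(three_month_moving_average(monthly_sales))
```
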

Thanks to a sudden increase in demand, shipment revenue of flexible active-matrix organic light-emitting diode (AMOLED) displays more than tripled in 2017, accounting for 54.6 percent of total AMOLED panel shipment revenue, according to business information provider IHS Markit (Nasdaq: INFO).

The flexible AMOLED panel market expanded by about 250 percent in 2017 to $12 billion from $3.5 billion in 2016, while rigid AMOLED panel shipment revenue contracted by 14 percent during the same period. Samsung Display started supplying its flexible AMOLED displays for the iPhone X in the third quarter of 2017, which greatly contributed to the overall shipment revenue increase. LG Display, BOE and Kunshan Govisionox Optoelectronics also started producing flexible AMOLED panels for smartphones and smartwatches in 2017, helping the market growth.

“High-end smartphone brands have increasingly applied flexible AMOLED panels to their products for unique and special design,” said Jerry Kang, senior principal analyst at IHS Markit. “The number of flexible AMOLED panel suppliers is also increasing, but the supplying capacity is still concentrated in Samsung Display.”

Flat-type flexible AMOLED panels accounted for about half of total flexible AMOLED shipment units in 2017, a shift from the curved type that had been the dominant flexible AMOLED form factor through 2016.

“As Apple applied the flat type to the iPhone X, the form factor of smartphone displays has diversified,” Kang said.

According to the latest AMOLED & Flexible Display Intelligence Service by IHS Markit, the demand for flexible AMOLED panels is not expected to grow as fast as supply capacity in 2018. “In a way to overcome potential oversupply, many panel makers are trying to develop another innovative form factor, such as foldable or rollable, within a few years,” Kang said.

Figure: Shipment revenue of AMOLED panels

Each year, Solid State Technology turns to industry leaders to hear viewpoints on the technological and economic outlook for the upcoming year. Read through these expert opinions on what to expect in 2018.

Enabling the AI Era with Materials Engineering

Prabu Raja, Senior Vice President, Semiconductor Products Group, Applied Materials

A broad set of emerging market trends such as IoT, Big Data, Industry 4.0, VR/AR/MR, and autonomous vehicles is accelerating the transformative era of Artificial Intelligence (AI). AI, when employed in the cloud and at the edge, will usher in the age of “Smart Everything” – from automobiles to planes, factories, buildings, and our homes – bringing fundamental changes to the way we live.

Semiconductors and semiconductor processing technologies will play a key enabling role in the AI revolution. The increasing need for greater computing performance to handle Deep Learning/Machine Learning workloads requires new processor architectures beyond traditional CPUs, such as GPUs, FPGAs and TPUs, along with new packaging solutions that employ high-density DRAM for higher memory bandwidth and reduced latency. Edge AI computing will require processors that balance the performance and power equation given their dependency on battery life. The exploding demand for data storage is driving adoption of 3D NAND SSDs in cloud servers with the roadmap for continued storage density increase every year.

In 2018, we will see the volume ramp of 10nm/7nm devices in Logic/Foundry to address the higher performance needs. Interconnect and patterning areas present a myriad of challenges best addressed by new materials and materials engineering technologies. In interconnect, cobalt is being used as a copper replacement metal in the lower level wiring layers to address the ever-growing resistance problem. The introduction of cobalt constitutes the biggest material change in the back-end-of-line in the past 15 years. In addition to its role as the conductor metal, cobalt serves two other critical functions – as a metal capping film for electromigration control and as a seed layer for enhancing gapfill inside the narrow vias and trenches.

In patterning, spacer-based double patterning and quad patterning approaches are enabling the continued shrink of device features. These schemes require advanced precision deposition and etch technologies for reduced variability and greater pattern fidelity. Besides conventional Etch, new selective materials removal technologies are being increasingly adopted for their unique capabilities to deliver damage- and residue-free extreme selective processing. New e-beam inspection and metrology capabilities are also needed to analyze the fine pitch patterned structures. Looking ahead to the 5nm and 3nm nodes, placement or layer-to-layer vertical alignment of features will become a major industry challenge that can be primarily solved through materials engineering and self-aligned structures. EUV lithography is on the horizon for industry adoption in 2019 and beyond, and we expect 20 percent of layers to make the migration to EUV while the remaining 80 percent will use spacer multi-patterning approaches. EUV patterning also requires new materials in hardmasks/underlayer films and new etch solutions for line-edge-roughness problems.

Packaging is a key enabler for AI performance and is poised for strong growth in the coming years. Stacking DRAM chips together in a 3D TSV scheme helps bring High Bandwidth Memory (HBM) to market; these chips are further packaged with the GPU in a 2.5D interposer design to bring compute and memory together for a big increase in performance.

In 2018, we expect DRAM chipmakers to continue their device scaling to the 1Xnm node for volume production. We also see adoption of higher performance logic technologies on the horizon for the periphery transistors to enable advanced performance at lower power.

3D NAND manufacturers continue to pursue multiple approaches for vertical scaling, including more pairs, multi-tiers or new schemes such as CMOS under array for increased storage density. The industry migration from 64 pairs to 96 pairs is expected in 2018. Etch (high aspect ratio), dielectric films (for gate stacks and hardmasks) along with integrated etch and CVD solutions (for high aspect ratio processing) will be critical enabling technologies.

In summary, we see incredible inflections in new processor architectures, next-generation devices, and packaging schemes to enable the AI era. New materials and materials engineering solutions are at the very heart of it and will play a critical role across all device segments.

As the world of advanced manufacturing enters the sub-nanometer scale era, it is clear that ALD, MLD and SAM represent viable options for delivering the required few-atoms-thick layers with uniformity, conformality, and purity.

BY BARRY ARKLES, JONATHAN GOFF, Gelest Inc., Morrisville PA; ALAIN E. KALOYEROS, SUNY Polytechnic Institute, Albany, NY

Device and system technologies across several industries are on the verge of entering the sub-nanometer scale regime. This regime requires processing techniques that enable exceptional atomic level control of the thickness, uniformity, and morphology of the exceedingly thin (as thin as a few atomic layers) film structures required to form such devices and systems.[1]

In this context, atomic layer deposition (ALD) has emerged as one of the most viable contenders to deliver these requirements. This is evidenced by the flurry of research and development activities that explore the applicability of ALD to a variety of material systems,[2,3] as well as the limited introduction of ALD TaN in full-scale manufacturing of nanoscale integrated circuitry (IC) structures.[4] Both the success and inherent limitations of ALD associated with repeated dual-atom interactions have stimulated great interest in additional self-limiting deposition processes, particularly Molecular Layer Deposition (MLD) and Self-Assembled Monolayers (SAM). MLD and SAM are being explored both as replacements and extensions of ALD as well as surface modification techniques prior to ALD.[5]

ALD is a thin film growth technique in which a substrate is exposed to alternate pulses of source precursors, with intermediate purge steps typically consisting of an inert gas to evacuate any remaining precursor after reaction with the substrate surface. ALD differs from chemical vapor deposition (CVD) in that the evacuation steps ensure that the different precursors are never present in the reaction zone at the same time. Instead, the precursor doses are applied as successive, non-overlapping gaseous injections. Each dose is followed by an inert gas purge that serves to remove both byproducts and unreacted precursor from the reaction zone.

The fundamental premise of ALD is based on self-limiting surface reactions, wherein each individual precursor-substrate interaction is instantaneously terminated once all surface reactive sites have been depleted through exposure to the precursor. For the growth of binary materials, each ALD cycle consists of two precursor and two purge pulses, with the thickness of the resulting binary layer per cycle (typically about a monolayer) being determined by the precursor-surface reaction mode. The low growth rates associated with each ALD cycle enable precise control of ultimate film thickness via the application of repeated ALD cycles. Concurrently, the self-limiting ALD reaction mechanisms allow excellent conformality in ultra-high-aspect-ratio nanoscale structures and geometries.[6]
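
The self-limiting behavior described above is what makes ALD film thickness a simple linear function of cycle count. A toy model, with Langmuir-type saturating coverage per pulse and an assumed growth-per-cycle value (illustrative units, not measured data):

```python
import math

def pulse_coverage(dose, saturation_dose):
    """Fractional surface coverage after one precursor pulse.

    Self-limiting Langmuir-type kinetics: coverage approaches 1.0 no
    matter how large the dose, which is why ALD thickness depends only
    on the number of cycles, not on over-dosing. Units are arbitrary.
    """
    return 1.0 - math.exp(-dose / saturation_dose)

def ald_thickness(cycles, growth_per_cycle_nm=0.1, dose=5.0,
                  saturation_dose=1.0):
    """Film thickness after a number of ALD cycles (illustrative model).

    growth_per_cycle_nm is an assumed per-cycle growth value; real
    processes are characterized experimentally.
    """
    theta = pulse_coverage(dose, saturation_dose)
    return cycles * growth_per_cycle_nm * theta

# Thickness scales linearly with cycle count once each pulse saturates
for n in (10, 50, 100):
    print(f"{n:>3} cycles -> {ald_thickness(n):.2f} nm")
```
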

A depiction of an individual ALD cycle is shown in FIGURE 1. In Fig. 1(a), a first precursor A is introduced in the reaction zone above the substrate surface.

FIGURE 1. Depiction of an individual ALD cycle.

Precursor A then adsorbs intact or reacts (partially) with the substrate surface to form a first monolayer, as shown in Fig. 1(b), with any excess precursor and potential byproducts being evacuated from the reaction zone through a subsequent purge step. In Fig. 1(d), a second precursor Y is injected into the reaction zone and is made to react with the first monolayer to form a binary atomic layer on the substrate surface, as displayed in Fig. 1(e). Again, all excess precursors and reaction byproducts are flushed out with a second purge step 1(f). The entire process is performed repeatedly to achieve the targeted binary film thickness.

In some applications, a direct or remote plasma is used as an intermediate treatment step between the two precursor-surface interactions. This treatment has been reported to increase the probability of surface adsorption by boosting the number of active surface sites and lowering the reaction activation energy. As a result, such treatment has led to increased growth rates and reduced processing temperatures.[7]

A number of benefits have been cited for the use of ALD, including high purity films, absence of particle contamination and pin-holes, precise control of thickness at the atomic level, excellent thickness uniformity and step coverage in complex via and trench topographies, and the ability to grow an extensive array of binary material systems. However, issues with surface roughness and large surface grain morphology have also been reported. Another limitation of ALD is the fact that it is primarily restricted to single or binary material systems. Finally, extremely slow growth rates continue to be a challenge, which could potentially restrict ALD’s applicability to exceptionally ultrathin films and coatings.

These concerns have spurred renewed interest in other molecular-level processing technologies that share the self-limiting surface reaction characteristics of ALD. Chief among them are molecular layer deposition (MLD) and self-assembled monolayers (SAM). MLD refers principally to ALD-like processes that also involve successive precursor-surface reactions in which the various precursors never cross paths in the reaction zone.[8] However, while ALD is employed to grow inorganic material systems, MLD is mainly used to deposit organic molecular films. It should be noted that this definition of MLD, although the most common, is not yet universally accepted. An alternative characterization refers to MLD as a process for the growth of organic molecular components that may contain inorganic fragments, yet one that does not exhibit the self-limiting growth features of ALD or its uniformity of film thickness and step coverage.[2]

A depiction of a typical MLD cycle, according to the most common definition, is shown in FIGURE 2. In Fig. 2(a), a precursor C is introduced into the reaction zone above the substrate surface. Precursor C adsorbs to the substrate surface and is initially confined by physisorption (Fig. 2(b)). The precursor then undergoes a quick chemisorption reaction with a significant number of active surface sites, leading to the self-limiting formation of molecular attachments in specific assemblies or regularly recurring structures, as displayed in Fig. 2(c). These structures form at significantly lower process temperatures than in traditional deposition techniques.

[FIGURE 2]

To date, MLD has been successfully applied to grow exceptionally thin films for applications as organic, inorganic, and hybrid organic-inorganic dielectrics and polymers for IC applications; [1,9] nanoprobes for in-vitro imaging and interrogation of biological cells; [10] photoluminescent devices; [7] and lithium-ion battery electrodes.[11]

SAM is a deposition technique that involves the spontaneous adherence of organized organic structures on a substrate surface. Such adherence takes place through adsorption from the vapor or liquid phase through relatively weak interactions with the substrate surface. Initially, the structures are adsorbed on the surface by physisorption through, for instance, van der Waals forces or polar interactions. Subsequently, the self-assembled monolayers become slowly confined by a chemisorption process, as depicted in FIGURE 3.

[FIGURE 3]

The ability of SAM to grow layers as thin as a single molecule through chemisorption-driven interactions with the substrate has triggered enthusiasm for its potential use in the formation of “near-zero-thickness” activation or barrier layers. It has also sparked interest in its applicability to area-selective or area-specific deposition. Molecules can be directed to react preferentially with specific segments of the underlying substrate rather than others, to facilitate or obstruct subsequent material growth. This feature makes SAM desirable for incorporation in area-selective ALD (AS-ALD) or CVD (AS-CVD), where the SAM-formed layer serves as a foundation or blueprint to drive AS-ALD or AS-CVD.[12,13]

To date, SAM has been effectively employed to form organic layers as thin as a single molecule for applications as organic, inorganic, and hybrid organic-inorganic dielectrics; polymers for IC applications; [13,14] encapsulation and barrier layers for IC metallization; [15] molecular and organic electronics; [16] photoluminescent devices; [5] and liquid crystal displays.[17]

As the world of advanced manufacturing enters the sub-nanometer-scale era, it is clear that ALD, MLD and SAM represent viable options for delivering the required few-atoms-thick layers with uniformity, conformality, and purity. By delivering the constituents of the material systems individually and sequentially into the processing environment, and by precisely controlling the resulting chemical reactions with the substrate surface, these techniques enable excellent command of processing parameters and superb management of the target specifications of the resulting films. To determine whether one or more of them ultimately make it into full-scale manufacturing, a great deal of additional R&D is required: understanding and establishing libraries of the fundamental interactions and mechanisms of source chemistries with various substrate surfaces, engineering viable solutions for surface smoothness and rough morphology, and developing protocols to enhance growth rates and overall throughput.

References

1. Belyansky, M.; Conti, R.; Khan, S.; Zhou, X.; Klymko, N.; Yao, Y.; Madan, A.; Tai, L.; Flaitz, P.; Ando, T. Silicon Compat. Mater. Process. Technol. Adv. Integr. Circuits Emerg. Appl. 4 2014, 61 (3), 39–45.
2. George, S. M.; Yoon, B. Mater. Matters 2008, 3 (2), 34–37.
3. George, S. M.; Yoon, B.; Dameron, A. A. Acc. Chem. Res. 2009, 42 (4), 498–508.
4. Graef, E.; Huizing, B. International Technology Roadmap for Semiconductors 2.0, 2015 ed.; 2015.
5. Kim, D.; Zuidema, J. M.; Kang, J.; Pan, Y.; Wu, L.; Warther, D.; Arkles, B.; Sailor, M. J. J. Am. Chem. Soc. 2016, 138 (46), 15106–15109.
6. George, S. M. Chem. Rev. 2010, 110 (1), 111–131.
7. Provine, J.; Schindler, P.; Kim, Y.; Walch, S. P.; Kim, H. J.; Kim, K. H.; Prinz, F. B. AIP Adv. 2016, 6 (6).
8. Räupke, A.; Albrecht, F.; Maibach, J.; Behrendt, A.; Polywka, A.; Heiderhoff, R.; Helzel, J.; Rabe, T.; Johannes, H.-H.; Kowalsky, W.; Mankel, E.; Mayer, T.; Görrn, P.; Riedl, T. 226th Meet. Electrochem. Soc. (2014 ECS SMEQ) 2014, 64 (9), 97–105.
9. Fichtner, J.; Wu, Y.; Hitzenberger, J.; Drewello, T.; Bachmann, J. ECS J. Solid State Sci. Technol. 2017, 6 (9), N171–N175.
10. Culic-Viskota, J.; Dempsey, W. P.; Fraser, S. E.; Pantazis, P. Nat. Protoc. 2012, 7 (9), 1618–1633.
11. Loebl, A. J.; Oldham, C. J.; Devine, C. K.; Gong, B.; Atanasov, S. E.; Parsons, G. N.; Fedkiw, P. S. J. Electrochem. Soc. 2013, 160 (11), A1971–A1978.
12. Sundaram, G. M.; Lecordier, L.; Bhatia, R. ECS Trans. 2013, 58 (10), 27–37.
13. Kaufman-Osborn, T.; Wong, K. T. Self-assembled monolayer blocking with intermittent air-water exposure. US20170256402 A1, 2017.
14. Arkles, B.; Pan, Y.; Kaloyeros, A. ECS Trans. 2014, 64 (9), 243–249.
15. Tan, C. S.; Lim, D. F. ECS Trans. 2012, 50, 115–123.
16. Kong, G. D.; Yoon, H. J. J. Electrochem. Soc. 2016, 163 (9), G115–G121.
17. Wu, K. Y.; Chen, W. Y.; Wang, C.-H.; Hwang, J.; Lee, C.-Y.; Liu, Y.-L.; Huang, H. Y.; Wei, H. K.; Kou, C. S. J. Electrochem. Soc. 2008, 155 (9), J244.

BY RYAN PEARMAN, D2S, Inc., San Jose, CA

There are big changes on the horizon for semiconductor mask manufacturing, including the imminent first production use of multi-beam mask writers and the preparation of all phases of semiconductor manufacturing for the introduction of extreme ultraviolet (EUV) lithography within the next few years. These changes, along with the increasing use of multiple patterning and inverse lithography technology (ILT) with 193i lithography, are driving the need for more detailed and more accurate modeling for mask manufacturing.

New solutions bring new mask modeling challenges

Both EUV and multi-beam mask writing provide solutions to many long-standing challenges for the semiconductor industry. However, they both create new challenges for mask modeling as well. Parameters once considered of negligible impact must be added to mask models targeted for use with EUV and/or multi-beam mask writers. In particular, the correct treatment of dose profiles has emerged as a critical component for mask models targeting these new technologies. This is in addition to scattering effects, such as the well-known EUV mid-range scatter, that must be included in mask models to accurately predict the final mask results. Gaussian models, which form the basis for most traditional mask models, will not be sufficient as many of these new parameters are more properly represented with arbitrary point-spread functions (PSFs).

The most obvious – and most desperately needed – benefit of EUV lithography is greater accuracy due to its enhanced resolution. However, this benefit comes along with a mask-making challenge: wafer-printing defects due to mask errors will appear more readily because of this enhanced resolution. Therefore, the introduction of EUV will require the mean-to-target (MTT) variability on photomasks to become smaller. From a mask manufacturability perspective, all sources of printing errors, systematic and random, must be improved. This means that mask models must also be more accurate, not only in predicting measurements, but also in predicting variability.

A well-known challenge for EUV mask modeling is the EUV mid-range scatter effect. The more complex topology of EUV masks leads to broader scattering effects. In addition to the “classical” forward- and back-scatter effects that dominate 193i lithography, there is a mid-range (~1μm) scatter that now requires modeling. This phenomenon is non-Gaussian in nature, so it cannot be simulated accurately with simple Gaussian (“1G”) models. In combination with better treatment of resist effects, a PSF-based model is a much better representation of the critical lithography process.
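
The distinction matters numerically. A minimal sketch (with illustrative, uncalibrated length scales and weights) shows that an exponential mid-range tail retains measurable weight a micron or more from the exposure point, where a Gaussian fitted to the forward-scatter term has fallen to essentially zero:

```python
# Illustrative comparison of a Gaussian forward-scatter term with a
# non-Gaussian (exponential) mid-range tail. Length scales and weights are
# assumptions for illustration, not calibrated EUV mask-model parameters.
import numpy as np

r = np.linspace(0.0, 3.0, 3001)        # radius in micrometers

def gaussian(r, sigma_um):
    return np.exp(-(r / sigma_um) ** 2)

forward = gaussian(r, 0.03)             # ~30 nm forward-scatter blur
mid_tail = np.exp(-r / 1.0)             # assumed ~1 um exponential mid-range tail
psf = forward + 0.05 * mid_tail         # composite point-spread function

# Beyond 2 um the composite PSF still carries measurable weight, while the
# Gaussian term alone is numerically zero there:
print(psf[r > 2.0].max(), forward[r > 2.0].max())
```

No single Gaussian can match both the sharp forward peak and the slowly decaying tail, which is why arbitrary PSFs are needed.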

The eagerly anticipated introduction of EUV will demand a lower-sensitivity resist to be used for EUV masks due to the smaller size of EUV features. This is one of the reasons why multi-beam mask writers have emerged as the replacement for variable shaped beam (VSB) tools for the next generation of mask writers. Slower resists require higher currents, and VSB tools today are limited thermally in ways the massively parallel multi-beam tools are not. In addition to thermal effects, VSB mask writers are runtime-limited by shot count; we are already approaching the practical limit for many advanced masks. Shot count is only expected to grow in the future as pitches shrink and complex small features become prevalent in EUV masks – and even in 193i masks due to increased use of ILT to improve process windows for 193i lithography.

In contrast to VSB mask writers, which use shaped apertures to project the shapes (usually rectangles) created by optical proximity correction (OPC) onto the mask, multi-beam mask writers rasterize the desired mask shapes into a field of pixels, each of which is written by one of hundreds of thousands of individual beamlets (FIGURE 1). This enables multi-beam mask writers to write masks in constant time, no matter how complicated the mask shapes. Each of these beamlets can be turned on and off independently to create the desired eBeam input, which enables the fine resolution of smaller shapes. However, it also means that the dose profiles for multi-beam writers are far more complex, leading to the need for more advanced, separable dose and shape modeling.
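
Rasterization itself is simple to sketch. The toy example below (pixel pitch and feature edges are illustrative assumptions) computes area-fraction doses for a 1D feature, and shows that the same feature shifted by a sub-pixel amount produces a different dose profile even though the total dose is conserved:

```python
# Sketch: rasterizing a 1D mask feature into per-pixel beamlet doses, and how
# a sub-pixel shift redistributes the dose. Pixel pitch and feature edges are
# illustrative assumptions, not tool parameters.
import numpy as np

PIXEL_NM = 10.0  # assumed beamlet pixel pitch

def rasterize(edge_left_nm, edge_right_nm, n_pixels=20):
    """Area-fraction dose per pixel for a feature spanning [left, right)."""
    doses = np.zeros(n_pixels)
    for i in range(n_pixels):
        lo, hi = i * PIXEL_NM, (i + 1) * PIXEL_NM
        overlap = max(0.0, min(hi, edge_right_nm) - max(lo, edge_left_nm))
        doses[i] = overlap / PIXEL_NM
    return doses

aligned = rasterize(40.0, 100.0)   # 60 nm feature with edges on the pixel grid
shifted = rasterize(43.0, 103.0)   # same feature, shifted 3 nm off-grid

print(aligned[3:11])
print(shifted[3:11])
```

The off-grid feature acquires fractional-dose edge pixels (0.7 and 0.3 here), which is exactly why identical patterns can print differently depending on their alignment to the pixel grid.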

[FIGURE 1]

Since the beamlets of a multi-beam tool are smaller than the primary length-scale of the dose blur, a key second advantage of multi-beam writers emerges: the patterns written are intrinsically curvilinear. In contrast, VSB mask writers can only print features with limited shapes – principally rectangular and 45-degree diagonals, although some tools enable circular patterns. The critical process-window enhancements for ILT also rely on curvilinear mask shapes, so a synergy appears: better treatment of curved edges at the mask writing step will lead to better wafer yield.

Dose and shape: New requirements for multi-beam and EUV mask models

Multi-beam mask writers, EUV masks, and even the proliferation of ILT will require mask models to change substantially. Until very recently, curvilinear mask features have been ignored when characterizing masks, and models, when used, have assumed simplicity. Primary electron blur (“forward scattering”), including chemically amplified resist (CAR) effects, has historically been assumed to be a set of Gaussians with length scales between 15nm and 300nm. All other effects of the mask-making process – long-range electron scattering (“back-scatter” and “fogging”), electron charging, development, and plasma-etching effects – have either been assumed to be constant regardless of mask shape or applied dose, or have been accounted for approximately by inline corrections in the exposure tool.

To meet the challenges posed by both EUV and multi-beam writing – especially since they are likely to be employed together – mask models will need to treat dose and shape separately, and to explicitly account for the various scattering, fogging, etch, and charging effects (FIGURE 2).

[FIGURE 2]

When masks were written entirely at nominal dose, dose-based effects could be handled together with shape-based effects as a single term. Several years ago, overlapping shots were introduced by D2S for VSB tools to both improve margins and reduce shot-count for complex mask shapes. At this time, it became clear that dose modulation (including overlapping shots) required specific modeling. Some effects (like etch) varied only with respect to the resist contour shapes, while other print bias effects were based on differences in exposure slope near the contour edge. For all the complexity of VSB overlapping shots, all identical patterns were guaranteed to print in the same way. Today, with multi-beam writers, there are significant translational differences in features due to dose-profile changes as they align differently with the multi-beam pixel grid.

We discussed earlier that multi-beam tools print curvilinear shapes. We should point out that even Manhattan designs become corner-rounded on the actual masks at line ends, corners, and jogs. Why? Physics is almost never Manhattan, and treating it as such will be inaccurate, as in the case of etching effects computed in the presence of Manhattan jogs. We need to embrace the fact that all printed mask shapes will be curvilinear and ensure that any shape-based simulation can predict effects at all angles, not just 0° and 90°.

Increasing mask requirements drive the need for mask model accuracy

As we continue to move forward to more advanced processes with ever-smaller feature sizes, the requirement for better accuracy increases. There is quite literally less room for any defects. This increased emphasis on accuracy and precision is what drives the adoption of new technologies such as EUV and multi-beam mask writing; it drives the increased need for better model performance as well.

We have already discussed several model parameters that will need to be re-evaluated and handled differently in order to achieve greater accuracy. Accuracy also requires a more rigorous approach to the calibration and validation of models with test chips that isolate specific physics effects with specific test structures. For example, masks that include complex shapes require 2D validation. Today’s VSB mask writers are Manhattan (1D) writing instruments, so models built using these tools are by definition 1D-centric. Inaccuracies in 1D models are exacerbated when tested against a 2D validation. Physics-based models are far more likely to extrapolate to 2D shapes, and are better for ILT.

As features shrink, the accuracy of individual shapes on the mask is impacted increasingly by their proximity to other shapes. The context for each shape on the mask becomes as important as the shape itself. The solution is to model each shape within the context of its surroundings. This is driving the need for simulation-based modeling and mask-correction methodologies.

GPU acceleration: Making simulation-based mask modeling practical

Historically, simulation-based processing of mask models resulted in unacceptably long runtimes. The most common approach until recently has been to use model-based or rules-based methodologies that, while less accurate, run faster. The advent of GPU-accelerated mask simulation has changed this picture. GPUs are “single instruction, multiple data” (SIMD) machines, which makes them a very good fit for simulating physical phenomena and enables full-reticle mask simulation within reasonable runtimes.

An additional advantage of GPU acceleration is the ability to employ PSFs without runtime impact (FIGURE 3). As we’ve already discussed, PSFs are a natural choice for the mask-exposure model, including EUV mask mid-range scattering effects, forward-scattering details, and modeling back-scattering by construction. Using PSFs, any dose effect of any type can be exactly modeled during simulation-based processing.

[FIGURE 3]

GPU acceleration opens the door for simulation-based correction of a multitude of complex mask effects based on physics-based models, affording practical simulation run times for these more complex models.
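
At its core, the simulation step is a convolution of the rasterized pattern with a PSF, a regular, data-parallel operation that maps naturally onto GPUs. A minimal CPU-side sketch using an FFT (the 64×64 grid and Gaussian PSF are illustrative assumptions, not a calibrated mask model):

```python
# Dose computation as a circular FFT convolution of a rasterized pattern with
# an arbitrary PSF. The grid size and Gaussian PSF are illustrative
# assumptions; a production model would use a calibrated, longer-range PSF.
import numpy as np

def convolve_dose(pattern, psf):
    """Circular convolution via FFT; pattern and psf share the same 2D shape."""
    return np.real(np.fft.ifft2(np.fft.fft2(pattern) * np.fft.fft2(psf)))

n = 64
pattern = np.zeros((n, n))
pattern[28:36, 20:44] = 1.0                    # one rectangular feature

y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(x**2 + y**2) / (2.0 * 2.0**2))  # assumed blur, sigma = 2 pixels
psf /= psf.sum()                               # normalize: total dose conserved
psf = np.fft.ifftshift(psf)                    # move kernel center to (0, 0)

dose = convolve_dose(pattern, psf)
```

Because the same multiply-accumulate runs independently for every frequency bin, the operation parallelizes trivially, and any PSF shape costs the same as a Gaussian.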

PLDC: New mask models at work in multi-beam mask writers

As with any big changes to the semiconductor manufacturing process, the industry has been preparing for EUV and multi-beam mask writing for several years. These preparations have required various members of the supply chain to work together to deploy effective solutions. One example of this collaboration in the mask-modeling realm is the introduction by NuFlare Technology of pixel-level dose correction (PLDC) in its MBM-1000 multi-beam mask writer. At the 2017 SPIE Photomask Japan conference, NuFlare and D2S jointly presented a paper [2] detailing the mask modeling – and GPU acceleration – used in this new inline mask correction.

PLDC manipulates the dose of pixels to perform short-range (10nm-scale to 3-5μm-scale) linearity correction while improving the overall printability of the mask. In addition to the traditional four-Gaussian (4G) PEC model, PLDC combines for the first time an inline 10nm-100nm short-range linearity correction with a 1μm-scale mid-range linearity correction (FIGURE 4). This mid-range correction is particularly useful for correcting EUV mid-range scatter.

[FIGURE 4]

The dose-based effects portion of the D2S mask model, TrueModel, is expressed as a PSF for interaction ranges up to 3-5μm, and with a 4G PEC model for interaction ranges up to 40-50μm. Being able to express any arbitrary PSF as the correction model allows smoothing of the “shoulders” that are often present in multiple-Gaussian models, and allows proper modeling of effects that are not fundamentally Gaussian in nature (such as the EUV mid-range scatter). This ability to model physical effects and correct for them inline with mask writing results in more accurate masks, including for smaller EUV shapes and for curvilinear ILT mask shapes.

PLDC is simulation-based, so it can be very accurate regardless of targeted shape or mask type (e.g., positive, negative EUV, ArF, NIL master), given the right set of mask modeling parameters.

GPU acceleration enables fast computation of PSF convolutions for all dose-based effects up to the 3-5μm range, performed inline in the MBM-1000, which helps to maintain turnaround time in the mask shop.

Conclusions

Mask models need significant adaptations to meet the coming challenges. The new EUV/multi-beam mask writer era will require mask models to be more detailed and more accurate. More complex dose profiles and more complex electron scattering require that PSFs be added to the industry-standard Gaussian models. More rigorous mask models with specific dose and specific shape effects are now needed. Simulation-based mask processing, made practical by GPU acceleration, is necessary to take context-based mask effects into account.

The good news is that the mask industry has been preparing for these changes for several years and stands ready with solutions to the challenges posed by these new technologies. Big changes are coming to the mask world, and mask models will be ready.

References

1. Pearman, Ryan, et al., “EUV modeling in the multi-beam mask writer era,” SPIE Photomask Japan, 2017.

2. Zable, Matsumoto, et al., “GPU-accelerated inline linearity correction: pixel-level dose correction (PLDC) for the MBM-1000,” SPIE Photomask Japan, 2017.

BY SYAHIRAH MD ZULKIFLI, BERNICE ZEE AND WEN QIU, Advanced Micro Devices, Singapore; ALLEN GU, ZEISS, Pleasanton, CA

3D integration and packaging have challenged failure analysis (FA) techniques and workflows due to the high complexity of multichip architectures, the large variety of materials, and the small form factors of highly miniaturized devices [1]. The drive toward die stacking with High Bandwidth Memory (HBM) moves higher bandwidth closer to the CPU and offers an opportunity to significantly expand memory capacity and maximize local DRAM storage for high throughput in the data center. However, the integration of HBM results in more complex electrical communications, due to the emerging use of a physical layer (PHY) design to connect the chip and subsystems. FIGURE 1 shows the schematic of a 2.5D stacked die package designed so that some HBM μbumps are electrically connected to the main CPU through a PHY connection. In general, the HBM-to-CPU signal length needs to be minimized to reduce drive strength requirements and power consumption at the PHY.

[FIGURE 1]

This requirement poses new challenges in FA fault isolation. A traditional FA workflow using electrical fault isolation (EFI) techniques to isolate the defect becomes less effective for chip-to-chip interconnects because there are no BGA balls for electrically probing the μbumps at the PHY. As a result, new defect localization techniques and FA flows must be investigated.

XRM theory

X-ray imaging is widely employed for non-destructive FA inspection because it can explore interior structures of chips and packages, such as solder balls, silver paste and lead frames. Thus, many morphological failures, such as solder-ball crack/burn-out and bumping failure inside IC packages, can be imaged and analyzed through X-ray tools. In 2D X-ray inspection, an X-ray irradiates samples and a 2D detector utilizes the projection shadow to construct 2D images. This technique, however, is not adequate for revealing true 3D structures since it projects 3D structures onto a 2D plane. As a result, important information, such as internal faulty regions of electronic packages, may remain hidden. This disadvantage can be overcome by using 3D X-ray microscopic technology, derived from the original computed tomography (CT) technique. In a 3D imaging system, a series of 2D X-ray images are captured at different angles while a sample rotates.

These 2D images are used to reconstruct 3D X-ray tomographic slices using mathematical models and algorithms. The spatial resolution of the imaging technique can be improved through the integration of an optical microscopy system; this improved technology is called 3D X-ray microscopy (XRM) [2]. FIGURE 2 shows an example 3D XRM image of a stacked die. The image clearly shows the internal structures – including the TSVs, C4 bumps and μbumps of the electronic components – without physically damaging or altering the sample. The high resolution and quality shown here are essential for inspecting small structural defects inside electronic devices. Given its non-destructive nature, 3D XRM has proven useful for FA of IC packaging devices.
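
The reconstruction principle can be sketched in 2D with filtered back-projection, here using scikit-image's radon transform utilities on a standard test phantom (a generic CT illustration under assumed library availability, not the algorithm of any particular XRM tool):

```python
# Sketch of the CT principle behind 3D XRM, in 2D: simulate projections of a
# slice at many rotation angles, then reconstruct the slice with filtered
# back-projection. Uses scikit-image (assumed available) and a test phantom.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

slice_true = rescale(shepp_logan_phantom(), 0.25)      # small test slice
angles = np.linspace(0.0, 180.0, 90, endpoint=False)   # rotation angles (deg)

sinogram = radon(slice_true, theta=angles)             # simulated 2D projections
slice_rec = iradon(sinogram, theta=angles, filter_name="ramp")

err = np.sqrt(np.mean((slice_rec - slice_true) ** 2))
print("RMS reconstruction error:", err)
```

Each column of the sinogram plays the role of one 2D X-ray image taken while the sample rotates; stacking reconstructed slices yields the 3D volume.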

[FIGURE 2]

Failure analysis approach

The purpose of an FA workflow is to have a sequence of analytical techniques that can help to effectively and quickly isolate the failure and determine the root cause. Typical FA workflows for flip-chip devices consist of non-destructive techniques such as C-Mode scanning acoustic microscopy (C-SAM) and time domain reflectometry (TDR) to isolate the failure, followed by destructive physical failure analysis (PFA). However, there are limitations to each of these techniques when posed with the failure analysis of a more complex stacked die package.

C-SAM allows the inspection of abnormal bumps, delamination and other mechanical failures. A focused sound wave is directed from a transducer to a small point on a target object and is reflected when it encounters a defect, inhomogeneity or a boundary inside the material. The transducer transforms the reflected sound pulses into electromagnetic pulses, which are displayed as pixels with defined grey values, thereby creating an image [3]. However, stacked die composed of multiple thin layers may complicate C-SAM analysis, because thin layers have smaller spacing between adjacent interfaces and shorter delay times for ultrasound traveling from one interface to another. Therefore, failures between the die and die attach may not be easily detected, and false readings may even be expected.
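
The timing problem is easy to quantify. A back-of-envelope sketch (assuming a longitudinal sound velocity in silicon of roughly 8,433 m/s; the layer thicknesses are illustrative):

```python
# Back-of-envelope echo timing for C-SAM: round-trip delay through one layer.
# Assumes a longitudinal sound velocity in silicon of ~8,433 m/s; the layer
# thicknesses below are illustrative, not from any specific package.
V_SI_M_PER_S = 8433.0

def echo_delay_ns(thickness_um, velocity=V_SI_M_PER_S):
    """Round-trip time of an ultrasound echo through one layer, in ns."""
    return 2.0 * (thickness_um * 1e-6) / velocity * 1e9

print(round(echo_delay_ns(700.0), 1), "ns for a 700 um die")
print(round(echo_delay_ns(50.0), 1), "ns for a 50 um thinned die")
```

The thinner the layer, the closer together the echoes from its two interfaces arrive, which is why thin stacked layers blur together in C-SAM.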

TDR is an electrical fault isolation tool that enables failure localization through electrical signal data. The TDR signal carries the impedance load information of the electrical circuitry; hence, the reflected signals show the location of the discontinuity that caused the impedance mismatch. In-depth TDR theory is discussed further in Chin et al [4]. However, TDR can only estimate where the failure lies – whether in the substrate, die or interposer region. To pinpoint the exact location within the failure area is difficult, due to limitations in separating the various small structures through the TDR signal. Additionally, some of the pulse power is reflected at every impedance change, posing challenges for unique defect isolation and signal complexity – especially for stacked die [5]. In cases where the failure pins reside in the HBM μbump region, no BGA ball out is available to probe and send an electrical pulse through.
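
The quantity TDR measures at each discontinuity is the reflection coefficient (Z_L - Z_0)/(Z_L + Z_0). A minimal sketch (the 50Ω reference and the load values are illustrative):

```python
# Sketch of the TDR principle: the fraction of an incident pulse reflected at
# an impedance discontinuity. The 50-ohm reference impedance and the example
# loads are illustrative assumptions.
def reflection_coefficient(z_load, z0=50.0):
    """Fraction of the incident TDR pulse reflected at an impedance change."""
    return (z_load - z0) / (z_load + z0)

print(reflection_coefficient(50.0))   # matched line: no reflection
print(reflection_coefficient(1e9))    # open circuit: coefficient approaches +1
print(reflection_coefficient(0.0))    # short circuit: coefficient is -1
```

An open defect therefore reflects nearly the full pulse, but since every intermediate impedance change along a stacked-die path also reflects part of the power, the trace becomes hard to interpret uniquely.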

Physical Failure Analysis (PFA) is a destructive method to find and image the failure once non-destructive fault isolation is complete. PFA can be done both mechanically and by focused ion beam (FIB). For stacked dies, FIB is predominantly used to image smaller interconnect structures such as TSVs and μbumps. However, the drawback is that the success of documenting the failure through PFA is largely dependent on how well the non-destructive FA techniques can isolate the failure region. Without good clear fault isolation direction, the failure region might be destroyed or missed during the PFA process, and thus no root cause can be derived.

The integration of XRM into the FA flow can help to overcome the limitations of the various analysis techniques to isolate the failure. It is a great advantage to image small structures and failures with the high spatial resolution and contrast provided by XRM and without destroying the sample. For failures in stacked die, XRM can be integrated into the FA flow for further fault isolation with high accuracy. The visualization of defects and failed material prior to destructive analysis increases FA success rates. However, the trade-off for imaging small defects at high resolution is time. For stacked die failures, C-SAM and TDR can first be performed to isolate the region of failure. With a known smaller region of interest to focus on, the time taken for XRM to visualize the area at high resolution is significantly reduced.

In cases where failures are identified in the HBM μbump, XRM is an effective technique to isolate the failure through 3D defect visualization. With the failure region isolated, XRM can then act as a guide to perform further PFA. Following are three case studies where XRM was used to image HBM packages with stacked dies.

Case studies

In the first case study, we explore the application of XRM as the primary means of defect visualization where other non-destructive testing and FA techniques are not possible. An open failure was reported for non-underfilled stacked die packages during a chip package interaction (CPI) study. The suspected open location was within the μbump joints at the HBM stack/interposer interface. The initial approach was to expose the bottom-most die of the HBM stack, followed by FIB cross-sectioning at the specified location. However, performing this destructive approach to visualize the integrity of μbump joints in non-underfilled stacked die packages was virtually impossible due to the fragility of silicon. The absence of underfill (UF) means that the HBM does not properly adhere to the interposer and is susceptible to peeling off. In addition, there was no medium to absorb the shear stresses experienced by the μbump joints under bending, which therefore could not be relieved by the package. As seen in FIGURE 3, parallel lapping of the HBM stack without UF caused die cracking and peeling.

[FIGURE 3]

Consequently, to avoid aggravating the damage to the sample, 3D XRM was performed to inspect and visualize the suspected location using a 0.7μm/voxel and 4X objective, without any sample preparation. FIGURE 4 shows an example virtual slice in which micro-cracks throughout the row of μbump joints are visualized. The micro-cracks measure only a few microns wide. It is worth noting that the micro-cracks were visible with a short scan time of 1.5 hrs.

[FIGURE 4]

With the critical defect information in 3D, PFA was performed on a sample that was underfilled to facilitate ease of sample preparation. SEM images in FIGURE 5 validated the existence of μbump micro-cracks observed by 3D XRM inspection.

In the second case study, the 3D XRM technique was applied to a stacked die package with a failure at a specific HBM/XPU physical interface (PHY) μbump connection. This μbump connection provides specific communication between the HBM stack and XPU die, and there is no package BGA ball out to enable electrical probing. Accordingly, it was not possible to verify if the failure type was an open or short. In addition, there was no means to determine if the failure was at the HBM or XPU die. Since defects from previous lots were open failures at the PHY μbump of the HBM, 3D XRM was performed at the suspected HBM open region using a 0.85μm/voxel and 4X objective.

As no defect was observed, XRM was then applied to the corresponding XPU PHY μbump. Contrary to the anticipated μbump open, a short was observed between two μbumps as shown in FIGURES 6a and 6b.

[FIGURES 6a and 6b]

The μbump short resulted from a solder extrusion bridging two adjacent μbumps. If 3D XRM had not been performed, a blind physical cross-section likely would have been performed on the initially suspected open region. As a result, the actual failure region may have been missed and/or destroyed.

In the final case study, an open failure was reported at a signal pin of a stacked die package. As per the traditional FA flow, C-SAM and TDR techniques were applied to isolate the fault. C-SAM results showed an anomaly, and TDR suggested an open in the substrate, as demonstrated in FIGURES 7a and 7b respectively.

[FIGURES 7a and 7b]

To verify the observations made by the C-SAM and TDR non-destructive techniques, 3D XRM was performed using a 0.80μm/voxel and 4X objective at the region of interest.

FIGURE 8 revealed a crack between the failing C4 bump and its associated TSV. A physical cross-section was performed, and passivation cracks between the TSV and the interposer backside redistribution layer (RDL) were observed, as shown in FIGURE 9.


In this case, 3D XRM provided 3D information for the FA engineer to focus on. Without the visual knowledge on the defect’s nature and location, the defect would have been missed during PFA.

Summary and conclusions

3D integration and packaging have brought about new challenges for effective defect localization, especially when traditional electrical fault isolation is not possible. 3D XRM enables 3D tomographic imaging of internal structures in chips, interconnects and packages, providing 3D structural information of failure areas without the need to destroy the sample. 3D XRM is a vital and powerful tool that helps failure analysis engineers to overcome FA challenges for novel 3D stacked-die packages.

Acknowledgement

This article is based on a paper that was presented at the 24th International Symposium on the Physical and Failure Analysis of Integrated Circuits (IPFA 2017).

References

1. F. Altmann and M. Petzold, “Innovative Failure Analysis Techniques for 3-D Packaging Developments,” IEEE Design & Test, Vol. 33, No. 3, pp. 46-55, June 2016.
2. C. Y. Liu, P. S. Kuo, C. H. Chu, A. Gu and J. Yoon, “High resolution 3D X-ray microscopy for streamlined failure analysis workflow,” 2016 IEEE 23rd International Symposium on the Physical and Failure Analysis of Integrated Circuits (IPFA), Singapore, 2016, pp. 216-219.
3. M. Yazdan Mehr et al., “An overview of scanning acoustic microscope, a reliable method for non-destructive failure analysis of microelectronic components,” 2015 16th International Conference on Thermal, Mechanical and Multi-Physics Simulation and Experiments in Microelectronics and Microsystems, Budapest, 2015, pp. 1-4.
4. J. M. Chin et al., “Fault isolation in semiconductor product, process, physical and package failure analysis: Importance and overview,” Microelectronics Reliability, Vol. 51, Issue 9, pp. 1440-8, Nov. 2011.
5. W. Yuan et al., “Packaging Failure Isolation with Time-Domain Reflectometry (TDR) for Advanced BGA Packages,” 2007 8th International Conference on Electronic Packaging Technology, Shanghai, 2007, pp. 1-5.