Category Archives: Packaging Materials

An international team of scientists from Peter the Great St. Petersburg Polytechnic University (SPbPU), Leibniz University Hannover (Leibniz Universität Hannover) and the Ioffe Institute has found a way to improve a nanocomposite material, opening new opportunities for its use in the hydrogen economy and other industries. The results are described in the article “The mechanism of charge carrier generation at the TiO2–n-Si heterojunction activated by gold nanoparticles,” published in the journal Semiconductor Science and Technology.

The study focuses on a composite material, a semiconductor based on titanium dioxide, whose applications are widely studied by researchers around the world. The processes that take place in this material are very complex. To use the semiconductor more effectively, it is therefore necessary to ensure that the energy stored between its layers can be released and transmitted.

Based on their experiments, the researchers from SPbPU, Leibniz University Hannover and the Ioffe Institute propose a qualitative model to explain these complex processes.

The scientific group used a composite material consisting of a silicon wafer (a standard wafer of the type used in electronic devices), gold nanoparticles and a thin layer of titanium dioxide. To transfer energy inside the material, the researchers set out to isolate the nanoparticles from the silicon: if the nanoparticles are not isolated from the silicon wafer, the energy can be transmitted neither to the silicon nor to the titanium dioxide, and it is lost.

“The material we obtained was a silicon wafer with pillar-like structures grown on its surface, which served as the substrate for the sample. Gold nanoparticles were situated on top of these pillars, and the whole structure was coated with titanium dioxide. Thus, the nanoparticles were in contact only with the titanium dioxide and were at the same time isolated from the silicon. With the number of boundaries between the layers reduced, we were able to describe the processes in the material. In addition, we assumed that this structure would increase the efficiency with which the energy of the light illuminating the surface of our material is used,” says Dr. Maxim Mishin, professor of the Physics, Chemistry and Technology of Microsystems Equipment Department at SPbPU.

The international scientific group developed the model of the new structure in St. Petersburg; the main part of the structure, a silicon wafer with pillars topped by gold nanoparticles, was then fabricated in Hannover.

The experiment was performed as follows. First, the wafer was oxidized, i.e. covered with an oxide layer, and gold nanoparticles were deposited on top of it.

“After that, we faced the next task: to create the pillars by etching the substrate so that it remained under the particles but not in between them. Considering that we are dealing with nanoscale dimensions, the diameter of the gold nanoparticles is about 10 nanometers and the height of the pillars is 80 nanometers, this is not a trivial task. The development of modern nanoelectronics makes it possible to use so-called “dry” etching methods such as reactive ion etching,” adds Dr. Marc Christopher Wurz from the Institute of Micro Production Technology at Leibniz University Hannover.

According to the scientists, developing the technology was not quick: in the first stages of the experiment, the ion etching simply swept all the gold nanoparticles off the oxidized wafer. Over the course of a week, the researchers tuned the parameters of the plasma etching system so that the gold nanoparticles remained on the surface. The whole experiment took about 10 days.

The project is ongoing. The researchers note that the nanocomposite material could be used in optical devices operating in the visible spectrum. It could also serve as a catalyst to produce hydrogen from water or to purify water by stimulating the decomposition of complex molecules, and it may be useful as an element of a sensor that detects gas leaks or elevated concentrations of harmful substances in the air.

With companies like Google, Microsoft, and IBM all racing to create the world’s first practical quantum computer, scientists worldwide are exploring the potential materials that could be used to build them.

Now, Associate Professor Yang Hyunsoo and his team from the Department of Electrical and Computer Engineering at the National University of Singapore (NUS) Faculty of Engineering have demonstrated a new method which could be used to bring quantum computing closer to reality.

“The NUS team, together with our collaborators from Rutgers, The State University of New Jersey in the United States and RMIT University in Australia, showed a practical way to observe and examine the quantum effects of electrons in topological insulators and heavy metals which could later pave the way for the development of advanced quantum computing components and devices,” explained Assoc Prof Yang.

The findings of the study were published in the scientific journal Nature Communications in June 2018.

The advantage of quantum computers

Quantum computers are still in the early stages of development but are already displaying computing speeds millions of times faster than traditional technologies. As such, it is predicted that when quantum computing becomes more readily available, it will be able to answer some of the world’s toughest questions in everything from finance to physics. This remarkable processing power is made possible by the radical way that quantum computers operate – using light rather than electricity.

Classical computers push electrons through devices which code information into binary states of ones and zeros. In contrast, quantum computers use laser light to interact with electrons in materials to measure the phenomenon of electron “spin”. These spinning electron states replace the ones and zeros used as the basis for traditional computers, and because they can exist in many spin states simultaneously, this allows for much more complex computing to be performed.

However, harnessing information based on the interactions of light and electrons is easier said than done. These interactions are incredibly complex and like anything in the quantum world there is a degree of uncertainty when trying to predict behaviour. As such, a reliable and practical way to observe these quantum effects has been sought-after in recent research to help in the discovery of more advanced quantum computing devices.

Visualising quantum spin effects

The real breakthrough from the scientists at NUS was the ability to “see” for the first time particular spin phenomena in topological insulators and metals using a scanning photovoltage microscope.

Topological insulators are electronic materials that are insulating in their interior but support conducting states on their surface, thus enabling electrons to flow along the surface of the material.

Assoc Prof Yang and his team examined platinum metal as well as topological insulators Bi2Se3 and BiSbTeSe2. An applied electrical current influenced the electron spin at the quantum level for all of these materials and the scientists were able to directly visualise this change using polarised light from the microscope.

Additionally, unlike other observational techniques, the innovative experimental setup meant that the results could be gathered at room temperature, making this a practical method of visualisation which is applicable to many other materials.

Mr Liu Yang, who is a PhD student with the Department and first author of the study, said, “Our method can be used as a powerful and universal tool to detect the spin accumulations in various materials systems. This means that developing better devices for quantum computers will become easier now that these phenomena can be directly observed in this way.”

Next steps

Moving forward, Assoc Prof Yang and his team are planning to test their new method on more novel materials with novel spin properties. The team hopes to work with industry partners to further explore the various applications of this unique technique, with a focus on developing the devices used in future quantum computers.

By Pete Singer

Nitrous oxide (N2O) has a variety of uses in the semiconductor manufacturing industry. It is the oxygen source for chemical vapor deposition of silicon oxy-nitride (doped or undoped) or silicon dioxide, where it is used in conjunction with deposition gases such as silane. It’s also used in diffusion (oxidation, nitridation, etc.), rapid thermal processing (RTP) and for chamber seasoning.

Why these uses, and more importantly what happens to the gas afterward, may soon come under more scrutiny: N2O is being included for the first time in the IPCC (Intergovernmental Panel on Climate Change) GHG (greenhouse gas) guidelines. The IPCC is refining the guidelines released in 2006 and expects to publish a new revision in 2019. “Refined guidelines are actually up and coming, and the inclusion of nitrous oxide in them is a major revision from the 2006 document,” said Mike Czerniak, Environmental Solutions Business Development Manager at Edwards. Czerniak is on the IPCC committee and is lead author of the semiconductor section.

Although the semiconductor industry uses a very small amount of N2O compared to other applications (dentistry, whipped cream, drag racing, scuba diving), it is a concern because, after CO2 and CH4, N2O is the third most prevalent man-made GHG, accounting for 7% of emissions. According to the U.S. Environmental Protection Agency, 5% of U.S. N2O originates from industrial manufacturing, including semiconductor manufacturing.

Czerniak said the semiconductor industry has been very proactive about trying to offset and reduce its carbon dioxide footprint. “The aspiration set by the World Semiconductor Council is to reduce the carbon footprint of a chip to 30 percent of what it was in 2010, which itself was a massive reduction from what it used to be back in the last millennium,” he said. Unfortunately, although that trend had been going down for the first half of the decade, it started going up again in 2016. “Although each individual processing step has a much lower carbon footprint than it used to have, the number of processing steps is much higher than it used to be,” Czerniak explained. “In the 1990s, it might take 300-400 processing steps to make a chip. Nowadays you’re looking at 2,000-4,000 steps.”

There are two ways of abating N2O so that it does not pollute the atmosphere: reduce it or oxidize it. Oxidizing it, which creates NO2 and NO (and other oxides known as NOx), is not the way to go, according to Czerniak. “These oxides have their own problems. NOx is a gas that most countries are trying to reduce emissions of. It’s usually found as a byproduct of fuel combustion, particularly in things like automobiles, and it adds to things like acid rain,” he said.

Edwards’ view is that it is much better to minimize the formation of the NOx in the first place. “The good news is that this is possible inside a combustion abatement system, where the gas comes in at the top and we burn a fuel gas and air on a combustor pad. Basically, the main reactant gas then is water vapor, which we use to remove the fluorine effluent, which is the one we normally try to get rid of from chamber cleans,” Czerniak said.

The tricky part is that information from the tool is required. “We can — when there is nitrous oxide present on a signal from the processing tool — add additional methane fuel into the incoming gas specifically to act as a reducing agent to reduce the nitrous oxide to nitrogen and water vapor,” he explained. “We inject it at just the right flow rate to effectively get rid of the nitrous oxide without forming the undesirable NOx byproducts.”

Figure 1 shows how careful control of combustion conditions, through the addition of CH4, makes them reducing rather than oxidizing during the N2O step. A flow of 30 slm of N2O represents two typical process chambers.
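As a rough illustration of what “just the right flow rate” implies, the sketch below estimates a stoichiometric methane flow for a given N2O flow, assuming the idealized overall reaction CH4 + 4 N2O → CO2 + 2 H2O + 4 N2. This reaction and the excess factor are assumptions for illustration only; the article does not describe Edwards’ actual dosing scheme.

```python
# Rough stoichiometric estimate of the CH4 flow needed to reduce a given N2O flow.
# Assumes the idealized overall reaction CH4 + 4 N2O -> CO2 + 2 H2O + 4 N2,
# i.e. 1 mole of CH4 per 4 moles of N2O. Real abatement systems dose on a signal
# from the process tool, so treat this as an order-of-magnitude sketch only.

def ch4_flow_for_n2o(n2o_flow_slm: float, excess: float = 1.1) -> float:
    """Return an estimated CH4 flow (slm) for a given N2O flow (slm).

    excess: multiplier above stoichiometry to allow for incomplete mixing
            (hypothetical value, not from the article).
    """
    stoichiometric_ratio = 1.0 / 4.0   # mol CH4 per mol N2O
    return n2o_flow_slm * stoichiometric_ratio * excess

if __name__ == "__main__":
    n2o = 30.0  # slm, the two-chamber example cited in the article
    print(f"~{ch4_flow_for_n2o(n2o):.1f} slm CH4 for {n2o:.0f} slm N2O")
```

For the 30 slm example above, the stoichiometric CH4 demand works out to only a few slm; in practice the injection is triggered and trimmed by the tool signal, as described in the quote.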

“It’s not complicated technology,” Czerniak concluded. “You just have to do it right.”

Leti, a research institute of CEA Tech, and Soitec, a designer and manufacturer of innovative semiconductor materials, today announced a new collaboration and five-year partnership agreement to drive the R&D of advanced engineered substrates, including SOI and beyond. This agreement brings the long-standing Leti-Soitec partnership to a whole new dimension and includes the launch of a world-class prototyping hub that brings in equipment partners to pioneer new materials. The Substrate Innovation Center will feature access to shared Leti-Soitec expertise around a focused pilot line. Key benefits for partners include access to early exploratory sampling and prototyping, collaborative analysis, and early learning at the substrate level, eventually leading to streamlined product viability and roadmap planning at the system level.

Leading chip makers and foundries worldwide use Soitec products to manufacture chips for consumer applications targeting performance, connectivity, and efficiency with extremely low energy consumption. Applications include smart phones, data centers, automotive, imagers, and medical and industrial equipment, but this list is always growing, along with the need for flexibility to explore new applications starting at the substrate level. At the Substrate Innovation Center, located on Leti’s campus, Leti and Soitec engineers will explore and develop innovative substrate features, expanding to new fields and applications with a special focus on 4G/5G connectivity, artificial intelligence, sensors and display, automotive, photonics, and edge computing.

“Material innovation and substrate engineering make entire new horizons possible. The Substrate Innovation Center will unleash the power of substrate R&D collaboration beyond the typical product road maps, beyond the typical constraints,” said Paul Boudre, Soitec CEO. “The Substrate Innovation Center is a one-of-a-kind opportunity open to all industry partners within the semiconductor value chain.”

Whereas a typical manufacturing facility has limited flexibility to try new solutions and cannot afford to take risks with prototyping, the mission of the Substrate Innovation Center is to become the world’s preferred hub for evaluating and designing engineered substrate solutions to address the future needs of the industry, inclusive of all the key players, from compound suppliers to product designers. Using state of the art, quality-controlled clean room facilities, and the latest industry-grade equipment and materials, Leti and Soitec engineers will conduct testing and evaluation at all levels of advanced substrate R&D.

“Leti and Soitec’s collaboration on SOI and differentiated materials, which extends back to Soitec’s launch in 1992, has produced innovative technologies that are vital to a wide range of consumer and industrial products and components,” said Emmanuel Sabonnadière, Leti CEO. “This new common hub at Leti’s campus marks the next step in this ongoing partnership. By jointly working with foundries, fabless, and system companies, we provide our partners with a strong edge for their future products.”

Intel has won SEMI’s 2018 Award for the Americas. SEMI honored the celebrated chipmaker for pioneering process and integration breakthroughs that enabled the first high-volume Integrated Silicon Photonics Transceiver. The award was presented yesterday at SEMICON West 2018.

SEMI’s Americas Awards recognize technology developments with a major impact on the semiconductor industry and the world.

The Intel® Silicon Photonics 100G CWDM4 (Coarse Wavelength Division Multiplexing 4-lane) QSFP28 optical transceiver, a highly integrated optical connectivity solution, combines the power of optics and the scalability of silicon. The small form-factor, high-speed, low-power consumption 100G optical transceivers are used in optical interconnects for data communications applications, including large-scale cloud and data centers, and in Ethernet switch, router, and client telecommunications interfaces.

Dr. Thomas Liljeberg, senior director of R&D for Intel Silicon Photonics, accepted the award on behalf of Intel. Dr. Liljeberg is one of the technologists responsible for bringing Intel’s silicon photonics 100G transceivers to high-volume production.

“Every year SEMI honors key technological contributions and industry leadership through the SEMI Award,” said David Anderson, president, SEMI Americas. “Intel was instrumental in delivering technologies that will influence product design and system architecture for many years to come. Congratulations to Intel for this significant accomplishment.”

“The 2018 Award recognizes the enablement of high-volume manufacturing through technology leadership and collaboration with key vendors in the supply chain,” said Bill Bottoms, chairman of the SEMI Awards Advisory Committee. “Intel’s collaboration is a model for how the industry can accelerate innovation in the future.”

SEMI established the SEMI Award in 1979 to recognize outstanding technical achievement and meritorious contributions in the areas of Semiconductor Materials, Wafer Fabrication, Assembly and Packaging, Process Control, Test and Inspection, Robotics and Automation, Quality Enhancement, and Process Integration.

The SEMI Americas award is the highest honor conferred by the SEMI Americas region. It is open to individuals or teams from industry or academia whose specific accomplishments have a broad commercial impact and widespread technical significance for the entire semiconductor industry. Nominations are accepted from individuals of North American-based member companies of SEMI. For a list of past award recipients, visit www.semi.org/semiaward.

SEMI yesterday honored two industry leaders at SEMICON West 2018 for their outstanding accomplishments in developing Standards for the electronics and related industries. The SEMI Standards awards were announced at the SEMI International Standards reception.

The Technical Editor Award recognizes the efforts of a member to ensure the technical excellence of a committee’s Standards. This year’s recipient is Sean Larsen of Lam Research. Mr. Larsen has led the North America EHS Committee and multiple EHS task forces for over a decade. His knowledge of the Regulations, Procedure Manual, and Style Manual, combined with his vast experience in the industry, ensures that complex safety matters are explained in a clear, consistent manner, and ballot authors frequently rely on him for his technical skills in preparing ballots.

In addition to co-chairing the North America EHS Committee, Mr. Larsen is currently the co-leader of the SEMI S22 (Electrical Design) Revision TF, the SEMI S2 Non-Ionizing Radiation TF, the SEMI S2 Korean High Pressure Gas Safety TF, and the Control of Hazardous Energy TF.

The Corporate Device Member Award recognizes the participation of the user community and is presented to individuals from device manufacturers. This year’s recipient is Don Hadder of Intel. Mr. Hadder has been actively involved in the Standards Program for several years, and currently leads the Chemical Analytical Methods Task Force and chairs the North America Liquid Chemicals Committee. He has successfully re-energized the committee, which is now focused on enabling continued process control improvements for advanced nodes. He recently drove the development of a critical new standard: SEMI C96, Test Method for Determining Density of Chemical Mechanical Polish Slurries, the first document in a series of SEMI Standards that will be devoted specifically to CMP slurry users, IDMs, slurry suppliers, metrology manufacturers and OEM equipment suppliers.

Mr. Hadder has worked at Intel for 23 years, where his experience and system ownership has been in Diffusion, Wet Etch, Planar-CMP, Ultra-Pure Water, Waste Treatment Systems, Abatement and Vacuum Systems, Bulk and Specialty Gas, Bulk Chemical Delivery and Planar Chemical Delivery.

By Pete Singer

Increasingly complicated 3D structures such as finFETs and 3D NAND require very high aspect ratio etches. This, in turn, calls for higher gas flow rates to improve selectivity and profile control. Higher gas flow rates also mean higher etch rates, which help throughput, and higher rates of removal of etch byproducts.

“Gas flow rates are now approaching the limit of the turbopump,” said Dawn Stephenson, Business Development Manager – Chamber Solutions at Edwards Vacuum. “No longer is it only the process pressure that’s defining the size of the turbopump, it’s now also about how much gas you can put through the turbopump.”

Turbopumps operate by spinning rotors at very high rates of speed (Figure 1). These rotors propel gases and process byproducts down and out of the pump. The rotors are magnetically levitated (maglev) to reduce friction and increase rotor speed.

Figure 1. Spinning rotors propel gases and process byproducts out of the pump.

The challenge starts with processes that combine high gas flow rates, over a thousand sccm, with low chamber pressures, below 100 mTorr. Such processes include chamber clean steps, where high flows of oxygen-containing gases are used to remove and flush process byproducts from the chamber; through-silicon via (TSV) etch, in which SF6 is widely used at high flow rates for deep silicon reactive ion etching (RIE); and, more recently, gaseous chemical oxide removal (COR), which typically uses HF and NH3 to remove oxide hard masks.

However, the challenge is intensified with the more general trend to higher aspect ratio etch across all technologies.

Stephenson said the maximum amount of gas you can put through a maglev turbo is determined by two things: the motor power and the rotor temperature. Both of these are affected adversely by the molecular weight of the gas. “The heavier the molecule, the lower the limit. For motor power, if the gas flow rate is increased, the load on the rotor is increased, and then you need more power. Eventually you reach a gas flow at which you exceed the amount of power you have to keep the rotor spinning and it will slow down,” she said.

The rotor temperature is an even bigger limiting factor. “As gas flow rates increase, the number of molecules hitting the rotor increases. The amount of energy transferred into the rotor also increases, which elevates its temperature. Because the rotor is suspended in a vacuum and because it’s levitated, it’s not very easy to remove that heat from the rotor, because its primary thermal transfer is through radiation,” she explained.
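To make the radiation argument concrete, here is a minimal back-of-the-envelope sketch, with purely illustrative numbers rather than Edwards specifications, that balances the gas-friction heat load against Stefan-Boltzmann radiation from the rotor to the housing:

```python
# Minimal steady-state estimate of a maglev turbopump rotor temperature, assuming
# the rotor sheds gas-friction heat only by radiation to the surrounding housing:
#     P_gas = emissivity * sigma * area * (T_rotor**4 - T_housing**4)
# All numbers below are illustrative placeholders, not Edwards pump specifications.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def rotor_temperature(p_gas_w, emissivity, area_m2, t_housing_k):
    """Solve the radiation balance above for the rotor temperature (kelvin)."""
    return (p_gas_w / (emissivity * SIGMA * area_m2) + t_housing_k**4) ** 0.25

if __name__ == "__main__":
    housing = 330.0   # K (~57 C), a heated-pump housing temperature (illustrative)
    area = 0.15       # m^2 of radiating rotor surface (illustrative)
    for eps in (0.1, 0.8):   # bare aluminum surface vs. a high-emissivity coating
        t = rotor_temperature(p_gas_w=50.0, emissivity=eps,
                              area_m2=area, t_housing_k=housing)
        print(f"emissivity {eps:.1f}: rotor ~{t - 273.15:.0f} C for a 50 W gas heat load")
```

Under this crude balance, raising the rotor’s surface emissivity sharply lowers its equilibrium temperature for the same heat load, which is consistent with the high-emissivity rotor coating discussed later in the article.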

Pumping heavier gases, particularly ones with poor thermal conductivity, causes the rotor temperature to rise, leading to what is known as “rotor creep.” Rotor creep is the gradual deformation of the rotor material under high temperature and centrifugal force (stress). Over time, creep deformation narrows the clearances between rotor and stator and can eventually lead to contact and catastrophic failure (Figure 2).

Figure 2. Edwards pumps have the highest benchmark for rotor creep life temperature in the industry, due to the use of a premium aluminum alloy as the base material for its mag-lev rotors, combined with a low stress design.

Where it gets even worse is in applications where the turbopump is externally heated to reduce byproduct deposition inside the pump. Such a heated pump has a higher baseline rotor temperature and significantly lower allowable gas flow rates than an unheated one. This becomes a challenge particularly for the heated turbopumps used on semiconductor etch and flat panel display processes with typical reactant gases such as HBr and SF6. “Those are very heavy gases with low thermal conductivity and the maximum limit of the turbopump is actually quite low,” Stephenson said.

The good news is that Edwards has been diligently working to overcome these challenges. “What we have done to maximize the amount of gas you can put into our turbopumps is to ensure our rotors can withstand the highest possible temperature design limit for a 10-year creep lifetime. We use a premium alloy for the base rotor material, and then beyond that we have done a lot of work with our proprietary modeling techniques to design a very low stress rotor, because the creep is due to two factors: the temperature and the centrifugal stress. Because of those two things combined, we’re able to achieve the highest benchmark for rotor creep life temperature in the industry,” she said.

Furthermore, the company has worked on thermal optimization of the turbopump platform. “That means putting in thermal isolation where needed to try to help keep the rotor and motor cool. At the same time, we also need to keep the gas path hot to stop byproducts from depositing. We have also released a high emissivity rotor coating that helps keep the rotor cool,” Stephenson said. A corrosion resistant, black ceramic rotor coating is used to maximize heat radiation, which helps keep the rotor cool and gives more headroom on gas flowrate before the creep life temperature is reached.

Edwards has also developed a unique real-time rotor temperature sensor: direct, dynamic rotor temperature reporting eliminates over-conservative estimates of the maximum gas flow limit and allows the pump to operate at its real maximum gas flow in a real duty cycle while maintaining safety and lifetime reliability.

In summary, enabling higher flows at lower process pressures is becoming a critical capability for advanced etch applications, and Edwards has addressed this need with several innovations, including an optimized rotor design to minimize creep, a high-emissivity rotor coating, and real-time rotor temperature monitoring.

To eliminate voids, it is important to control the process to minimize moisture absorption and optimize a curing profile for die attach materials.

BY RONGWEI ZHANG and VIKAS GUPTA, Semiconductor Packaging, Texas Instruments Inc., Dallas, TX

Polymeric die attach material, either in paste or in film form, is the most common type of adhesive used to attach chips to metallic or organic substrates in plastic-encapsulated IC packages. It offers many advantages over solders such as lower processing temperatures, lower stress, ease of application, excellent adhesion and a wide variety of products to meet a specific application. As microelectronics move towards thinner, smaller form factors, increased functionality, and higher power density, void formation in die attach joints (FIGURE 1), i.e. in die attach materials and/or at die attach interfaces, is one of the key issues that pose challenges for thermal management, electrical insulation and package reliability.

Impact of voids

Voids in die attach joints have a significant impact on die attach material cracking and interfacial delamination. Voids increase moisture absorption. If plastic packages with a larger amount of absorbed moisture are subject to a reflow process, the absorbed moisture (or condensed water in the voids) will vaporize, resulting in a higher vapor pressure. Moreover, stress concentrations occur near the voids and frequently are responsible for crack initiation. On the other hand, voids at the interface can degrade adhesive strength. The combined effect of higher vapor pressure, stress concentration around the voids and decreased adhesion, as a result of void formation, will make the package more susceptible to delamination and cracking [1].

Additionally, in plastic packages with an exposed pad, heat is dissipated mainly through the die attach layer to the exposed pad. Voids in die attach joints can result in a higher thermal resistance and thus significantly higher junction temperatures, thereby impacting power device performance and reliability.
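As a crude back-of-the-envelope illustration of that effect, the sketch below models voids as simply removing conductive cross-sectional area from the bond line, ignoring heat spreading and interface resistance; all values are illustrative assumptions, not data from this article.

```python
# Crude estimate of how die attach voids raise junction temperature.
# Model assumption: voids simply remove conductive cross-sectional area, so
# R_th = BLT / (k * A * (1 - void_fraction)). This ignores lateral heat spreading
# and interface effects; all numbers are illustrative, not from the article.

def die_attach_resistance(blt_m, k_w_mk, area_m2, void_fraction):
    """Thermal resistance (K/W) of the die attach layer for a given void fraction."""
    return blt_m / (k_w_mk * area_m2 * (1.0 - void_fraction))

if __name__ == "__main__":
    blt = 25e-6   # m, 25-micron bond line thickness (illustrative)
    k = 3.0       # W/(m*K), typical of a silver-filled adhesive (illustrative)
    area = 9e-6   # m^2, a 3 mm x 3 mm die (illustrative)
    power = 2.0   # W dissipated by the die (illustrative)
    for vf in (0.0, 0.1, 0.3):
        r = die_attach_resistance(blt, k, area, vf)
        print(f"void fraction {vf:.0%}: R_th = {r:.2f} K/W, dT = {power * r:.1f} K")
```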

And finally, voiding is known to adversely affect electrical performance. Voiding can increase the volume resistivity of electrically conductive die attach materials, while decreasing electrical isolation capability. Therefore, it is crucial to minimize or eliminate voids in die attach joints to prevent mechanical, thermal and electrical failures.

Void detection

The ability to detect voids is key to ensuring the quality and reliability of die attach joints. There are four common techniques to detect voids: (1) scanning acoustic microscopy (SAM), (2) X-ray imaging, (3) cross-sectioning or parallel polishing with optical or electron microscopy, and (4) a glass die/slide with an optical microscope (Fig. 1). The significant advantage of SAM over the other techniques lies in its ability to detect voids in different layers within a package non-destructively. Void detection is limited by the minimal defect size resolvable by SAM; if a void is too small, it may not be detected at all, depending on the package and the equipment used. X-ray analysis allows for non-destructive detection of voids in silver-filled die attach materials. However, its limitations lie in its low resolution and magnification, its low sensitivity for detecting voids in thick samples, and its inability to differentiate voids at different interfaces [2]. Cross-sectioning or parallel polishing with an electron microscope provides a very high magnification image to detect small voids, although it is destructive and time-consuming. A glass die or glass substrate with an optical microscope provides a simple, quick and easy way to visualize the voids.

Potential root causes of voids and solutions

There are four major sources of voids: (1) air trapped during a thawing process, (2) moisture induced voids, (3) voids formed during die attach film (DAF) lamination, and (4) volatile induced voids.

Freeze-thaw voids

When an uncured die attach paste in a plastic syringe is removed from a freezer (typically -40°C) to an ambient environment for thawing, the syringe warms and expands faster than the adhesive. This introduces a gap between the syringe and the adhesive. Upon thawing, the adhesive re-wets the syringe wall, and air located between the container and the adhesive may become trapped. As a result, voids form. These are referred to as freeze-thaw voids [3]. Voids in pastes may cause incomplete dispensing patterns, leading to inconsistent bond line thickness (BLT) and die tilt, and thus delamination. A planetary centrifugal mixer is the most commonly used and most effective piece of equipment for removing this type of void.

Moisture induced voids

Die attach materials contain polar functional groups, such as hydroxyl groups in epoxy resins and amide groups in curing agents, which absorb moisture from the environment during the die attach process. As the industry moves to larger lead frame strips (100mm x 300mm), the total number of units on a strip increases significantly, so die attach pastes may be exposed to the production environment significantly longer before die placement. After die placement, there can also be a significant waiting time (up to 24 hours) before curing. Both can result in high moisture absorption in die attach pastes. Moreover, organic substrates can absorb moisture, and moisture may be present on metal lead frame surfaces. As the temperature increases during curing, absorbed moisture or condensed water evolves as steam and causes voiding. Voids can also form at the DAF-substrate interface as a result of moisture uptake during the staging time between film attach and the encapsulation process. Controlling the moisture absorption of substrates and die attach materials at each stage before curing, as well as the production environment, is critical to preventing moisture-induced voids in die attach joints.

Void formation during DAF lamination

One challenge associated with DAF is voiding during DAF lamination, especially when it is applied to organic substrates [FIGURE 1(d)]. The void pattern correlates with the substrate surface topography [4]. Generally, increasing the temperature, pressure and press time reduces the DAF melt viscosity and enables the DAF to better wet the lead frame or substrate, thereby preventing void entrapment during the die attach process. If the DAF cure percentage is high before molding, the DAF has limited flowability and cannot completely fill the large gaps on the substrate; consequently, voids remain at the interface between the DAF and the organic substrate from the die bonding process onward. If the DAF has a lower cure percentage before molding, it can re-soften and flow into the large gaps under heat and transfer pressure, achieving a void-free bond line after molding [4].

Volatile induced voids

Voids in die attach joints are generally formed during thermal curing, since die attach pastes contain volatiles such as low molecular weight additives, diluents and, in some cases, solvents used to adjust the viscosity for dispensing or printing. To study the effect of outgassing on voids, we selected three commercially available die attach materials with significantly different outgassing amounts and used the same curing profile for each. As shown in FIGURE 2, all three die attach pastes outgas as temperature increases: DA1 shows a weight loss of 0.74 wt%, DA2 3.1 wt% and DA3 10.62 wt%. Once volatiles start to outgas during thermal curing, they accumulate within the die attach material or at die attach interfaces. Voids begin to form by the entrapment of outgassing species or moisture, and they can continue to grow until the volatiles have been consumed or the paste has cured enough to form a highly cross-linked network. FIGURE 3 shows optical images of dies assembled onto glass slides using the three die attach materials. As expected, DA1 shows no voids for either die size (2.9mm x 2.9mm and 9.0mm x 9.2mm), due to its very low outgassing (0.74 wt%). DA2 shows no voids for the small die but many small voids under the die periphery for the large die, and it also shows voids for a medium die size of 6.4mm x 6.4mm [FIGURE 3(g)]. Large voids are observed for DA3 for both die sizes, since it has a very large amount of outgassing (10.62 wt%).

Differential Scanning Calorimetry (DSC) was used to further study the curing behaviors of DA2 and DA3, as shown in FIGURES 4 and 5. Comparing FIGURE 4 with FIGURE 5, it is interesting to observe the difference in thermal behavior of the two materials. For DA2, the weight loss rate slows as curing starts, while for DA3 it accelerates. It is very likely that the outgassing species in DA2 is a reactive diluent, which outgasses more slowly once the reaction starts, whereas the outgassing from DA3 is a non-reactive solvent, possibly along with other reactive species. The non-reactive solvent has a boiling point of 172.9°C, as verified by DSC, and the heat generated during curing accelerates its evaporation. The continuous, slow release of volatiles during the ramp and during curing at 180°C explains the formation of small voids in DA2, while the fast evaporation of solvent accounts for the large voids in DA3.

To reduce or eliminate voids during thermal curing, the simplest and most common approach is a two-step (or multi-step) cure: the first step is designed to remove volatiles and is followed by a second curing step. With a first step at 120°C for 1 h to remove more volatiles, DA2 shows significantly fewer voids for a die size of 6.4mm x 6.4mm [FIGURE 3(h)].
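As a rough illustration of how TGA data can inform the choice of that first soak step, here is a minimal sketch, with hypothetical mass-loss data rather than the DA1/DA2/DA3 measurements, that totals the weight loss and splits it into the portions released at or below and above a candidate soak temperature.

```python
# Minimal sketch: summarize TGA mass-loss data when choosing a pre-cure soak step.
# The temperature/mass arrays are hypothetical placeholders, not the DA1/DA2/DA3
# measurements from the article; in practice they would be exported from the TGA.

def weight_loss_split(temps_c, masses_mg, soak_temp_c):
    """Return (total wt% loss, wt% lost at or below soak temp, wt% lost above it)."""
    m0 = masses_mg[0]
    total = (m0 - masses_mg[-1]) / m0 * 100.0
    # Mass at the highest reading taken at or below the soak temperature.
    m_soak = next((m for t, m in zip(reversed(temps_c), reversed(masses_mg))
                   if t <= soak_temp_c), m0)
    below = (m0 - m_soak) / m0 * 100.0
    return total, below, total - below

if __name__ == "__main__":
    temps = [25, 60, 90, 120, 150, 180]            # deg C during the ramp (hypothetical)
    mass  = [10.00, 9.99, 9.96, 9.90, 9.82, 9.78]  # sample mass in mg (hypothetical)
    total, below, above = weight_loss_split(temps, mass, soak_temp_c=120)
    print(f"total loss {total:.2f} wt%: {below:.2f} wt% at/below 120 C, {above:.2f} wt% above")
```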

Ideally, the majority (if not all) of the volatiles should be removed before the gelation point, defined as the intersection of G’ and G’’ in a rheological test, because the viscosity of die attach materials increases dramatically after the gelation point. Volatiles released after the gelation point (i.e., in the later stages of curing) are more likely to form voids. Therefore, the combined characterization by TGA and DSC, together with rheological testing, provides a good guideline for designing curing profiles that minimize or eliminate voids.
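As an illustration of how the gelation point can be extracted from rheometer data, here is a minimal sketch that finds the first G’/G’’ crossover by linear interpolation; the data arrays are hypothetical placeholders, not measurements from this work.

```python
# Minimal sketch: locate the gel point as the G'/G'' crossover in rheometer data.
# The time and modulus arrays below are hypothetical; in practice they would come
# from an oscillatory rheometer sweep recorded during the cure ramp.

def gel_point(time_s, g_storage, g_loss):
    """Return the time at which G' first exceeds G'' (linear interpolation),
    or None if no crossover is found."""
    for i in range(1, len(time_s)):
        if g_storage[i - 1] < g_loss[i - 1] and g_storage[i] >= g_loss[i]:
            # Interpolate between the two samples that bracket the crossover.
            d_prev = g_loss[i - 1] - g_storage[i - 1]
            d_curr = g_storage[i] - g_loss[i]
            frac = d_prev / (d_prev + d_curr)
            return time_s[i - 1] + frac * (time_s[i] - time_s[i - 1])
    return None

if __name__ == "__main__":
    t  = [0, 60, 120, 180, 240, 300]          # s into the cure (hypothetical)
    gp = [10, 50, 300, 2000, 9000, 30000]     # G' storage modulus, Pa (hypothetical)
    gl = [200, 500, 1200, 2500, 4000, 5000]   # G'' loss modulus, Pa (hypothetical)
    print(f"Gel point at ~{gel_point(t, gp, gl):.0f} s")
```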

Summary

This article provides an understanding of the impact of voids in die attach joints, the techniques used to detect voids, voiding mechanisms and their corresponding solutions. To eliminate voids, it is important to control the process to minimize moisture absorption and to optimize the curing profile for die attach materials. TGA, DSC and rheometry are key analytical tools for optimizing a curing profile to prevent voiding. In addition, many other properties, such as modulus, coefficient of thermal expansion (CTE) and adhesion, need to be considered when optimizing curing profiles. Last but not least, it is crucial to develop die attach materials with less outgassing and moisture absorption without compromising manufacturability, reliability and performance.

References

1. R.W. Zhang, et al., “Solving delamination in lead frame-based packages,” Chip Scale Review, 2015, pp. 44-48.
2. L. Angrisani, et al., “Detection and location of defects in electronic devices by means of scanning ultrasonic microscopy and the wavelet transform,” Measurement, 2002, Vol. 31, pp. 77-91.
3. D. Wyatt, et al., “Method for reducing freeze-thaw voids in uncured adhesives,” 2006, US 11/402,170.
4. Y. Q. Su, et al., “Effect of transfer pressure on die attach film void performance,” 2009 IEEE 11th Electronic Packaging Technology Conference, pp. 754-757.

RONGWEI ZHANG is a Packaging Engineer, and VIKAS GUPTA is an Engineering Manager, Semiconductor Packaging, Texas Instruments Inc., Dallas, TX.

If your laptop or cell phone starts to feel warm after playing hours of video games or running too many apps at one time, those devices are actually doing their job.

Whisking heat away from the circuitry in a computer’s innards to the outside environment is critical: Overheated computer chips can make programs run slower or freeze, shut the device down altogether or cause permanent damage.

As consumers demand smaller, faster and more powerful electronic devices that draw more current and generate more heat, the issue of heat management is reaching a bottleneck. With current technology, there’s a limit to the amount of heat that can be dissipated from the inside out.

Researchers at the University of Texas at Dallas and their collaborators at the University of Illinois at Urbana-Champaign and the University of Houston have created a potential solution, described in a study published online July 5 in the journal Science.

Researchers at the University of Texas at Dallas and their collaborators have created and characterized tiny crystals of boron arsenide, like the one shown here imaged with an electron microscope, that have high thermal conductivity. Because the semiconducting material efficiently transports heat, it might be used in future electronics to help keep smaller, more powerful devices from overheating. The research is described in a study published online July 5, 2018 in the journal Science. Credit: University of Texas at Dallas

Bing Lv (pronounced “love”), assistant professor of physics in the School of Natural Sciences and Mathematics at UT Dallas, and his colleagues produced crystals of a semiconducting material called boron arsenide that have an extremely high thermal conductivity, a property that describes a material’s ability to transport heat.

“Heat management is very important for industries that rely on computer chips and transistors,” said Lv, a corresponding author of the study. “For high-powered, small electronics, we cannot use metal to dissipate heat because metal can cause a short circuit. We cannot apply cooling fans because those take up space. What we need is an inexpensive semiconductor that also disperses a lot of heat.”

Most of today’s computer chips are made of the element silicon, a crystalline semiconducting material that does an adequate job of dissipating heat. But silicon, in combination with other cooling technology incorporated into devices, can handle only so much.

Diamond has the highest known thermal conductivity, around 2,200 watts per meter-kelvin, compared to about 150 watts per meter-kelvin for silicon. Although diamond has been incorporated occasionally in demanding heat-dissipation applications, the cost of natural diamonds and structural defects in manmade diamond films make the material impractical for widespread use in electronics, Lv said.

In 2013, researchers at Boston College and the Naval Research Laboratory published research that predicted boron arsenide could potentially perform as well as diamond as a heat spreader. In 2015, Lv and his colleagues at the University of Houston successfully produced such boron arsenide crystals, but the material had a fairly low thermal conductivity, around 200 watts per meter-kelvin.

Since then, Lv’s work at UT Dallas has focused on optimizing the crystal-growing process to boost the material’s performance.

“We have been working on this research for the last three years, and now have gotten the thermal conductivity up to about 1,000 watts per meter-kelvin, which is second only to diamond in bulk materials,” Lv said.
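For a sense of what these conductivity values mean in a device, the sketch below compares the steady-state temperature drop across a thin heat-spreading layer of silicon, boron arsenide and diamond using one-dimensional conduction, ΔT = q·L/k; the heat flux and layer thickness are illustrative assumptions, not values from the study.

```python
# Compare the temperature drop across a thin heat-spreading layer for several
# materials using 1-D steady-state conduction: dT = q * L / k.
# Thermal conductivities are the bulk values quoted in the article; the heat flux
# and layer thickness are illustrative assumptions, not values from the study.

K_W_PER_M_K = {
    "silicon":        150.0,
    "boron arsenide": 1000.0,
    "diamond":        2200.0,
}

def delta_t(heat_flux_w_m2: float, thickness_m: float, k: float) -> float:
    """Temperature drop (K) across a layer of the given thickness."""
    return heat_flux_w_m2 * thickness_m / k

if __name__ == "__main__":
    q = 1.0e6    # W/m^2 (100 W over 1 cm^2), an illustrative chip-scale heat flux
    L = 500e-6   # m, an illustrative 500-micron spreader thickness
    for name, k in K_W_PER_M_K.items():
        print(f"{name:>15}: dT ~ {delta_t(q, L, k):.1f} K")
```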

Lv worked with postdoctoral research associate Sheng Li, co-lead author of the study, and physics doctoral student Xiaoyuan Liu, also a study author, to create the high thermal conductivity crystals at UT Dallas using a technique called chemical vapor transport. The raw materials — the elements boron and arsenic — are placed in a chamber that is hot on one end and cold on the other. Inside the chamber, another chemical transports the boron and arsenic from the hot end to the cooler end, where the elements combine to form crystals.

“To jump from our previous results of 200 watts per meter-kelvin up to 1,000 watts per meter-kelvin, we needed to adjust many parameters, including the raw materials we started with, the temperature and pressure of the chamber, even the type of tubing we used and how we cleaned the equipment,” Lv said.

David Cahill and Pinshane Huang’s research groups at the University of Illinois at Urbana-Champaign played a key role in the current work, studying defects in the boron arsenide crystals by state-of-the-art electron microscopy and measuring the thermal conductivity of the very small crystals produced at UT Dallas.

“We measure the thermal conductivity using a method developed at Illinois over the past dozen years called ‘time-domain thermoreflectance’ or TDTR,” said Cahill, professor and head of the Department of Materials Science and Engineering and a corresponding author of the study. “TDTR enables us to measure the thermal conductivity of almost any material over a wide range of conditions and was essential for the success of this work.”

The way heat is dissipated in boron arsenide and other crystals is linked to the vibrations of the material. As the crystal vibrates, the motion creates packets of energy called phonons, which can be thought of as quasiparticles carrying heat. Lv said the unique features of boron arsenide crystals — including the mass difference between the boron and arsenic atoms — contribute to the ability of the phonons to travel more efficiently away from the crystals.

“I think boron arsenide has great potential for the future of electronics,” Lv said. “Its semiconducting properties are very comparable to silicon, which is why it would be ideal to incorporate boron arsenide into semiconducting devices.”

Lv said that while the element arsenic by itself can be toxic to humans, once it is incorporated into a compound like boron arsenide, the material becomes very stable and nontoxic.

The next step in the work will include trying other processes to improve the growth and properties of this material for large scale applications, Lv said.

By Paula Doe, SEMI

New metrology and inspection technologies, together with new analysis approaches made possible by improving compute technology, offer ways to find the increasingly subtle variations in materials and subsystems that meet specifications but still cause defects on the wafer. More collaboration across the supply chain is helping too. SEMICON West programs on materials and subsystems will address these issues.

New metrology approaches needed to deal with process margin challenges

As device process margins shrink and subtler materials variations cause unwanted results, the need for better monitoring of both surface and sub-surface material variations is driving a trend towards “metro-spection,” the convergence of metrology and inspection. “Device process margins have eroded to the point that traditional metrology strategies and techniques are no longer viable for controlling yield and parametric performance,” says Nanometrics Vice President Robert Fiordalice, who will speak in the materials program at SEMICON West. “Limited sampling capability, low throughput, insufficient sensitivity or the destructive nature of the techniques can often become problems. What’s more, deviations in material characteristics are not always determined by the initial quality of the material, but often arise from variations during the integration of the materials.”

“Device process margins have eroded to the point that traditional metrology strategies and techniques are no longer viable for controlling yield and parametric performance.” – Robert Fiordalice, Nanometrics

One new type of inline tool or line monitoring technology is Fourier Transform Infrared (FTIR) spectroscopy, traditionally used in quality control or tool characterization. Better sensitivity and higher throughput now enable rapid analysis and feedback for on-the-fly detection of subtle deviations in film properties that may compromise device performance or yield.

More advanced analytics will help extract new information from old metrology

More expensive metrology may not be required to identify subtle variations in in-spec materials that cause wafer defects. Today’s advanced compute capabilities now enable more sophisticated analysis of existing data and the identification of small but significant variations in raw materials and finished goods.

The figure of merit (FoM) values presented in certificate of analysis (CoA) reports miss subtle variations in raw material properties. Of particular note is the reduction of molecular weight distributions to a mean and standard deviation, whereas variations in the tails are associated with pattern defects. Advanced compute capabilities now allow the industry to step beyond the FoM in favor of more holistic measures, enabling predictive analysis of resist chemical variations associated with specific pattern defects. Source: JSR Micro

“We often don’t need to find a new measure, but just a new way of looking at what we measure now,” says Jim Mulready, vice president of global quality assurance at JSR Micro. Mulready will speak in the SEMICON West program on materials defectivity issues. “The certificate of analysis reduces multiple measurements to a single figure of merit. But if we ignore all that raw data, we miss a chance to learn.  One of our sayings in quality is ‘Customers don’t feel the average, they feel the variation.’ In many electronic materials, the quality of the raw material can have a big impact on the final performance, but the types of analysis needed to look at the tails of the distribution of these measures (such as molecular weight) in detail used to be really hard to do. Now it’s becoming increasingly straightforward and affordable.”

“We often don’t need to find a new measure, but just a new way of looking at what we measure now.” – Jim Mulready, JSR Micro

Mulready says tools now available in the data processing sector enable the identification of subtle variations in materials that can cause defects on the wafer. These tools use methods like detailed subtractions of chromatography curves of polymer raw materials or analysis of tails of distributions of molecular weights. “Our job now is to drive these kinds of more sophisticated data analysis back into our chemical supply chain as well,” says Mulready. “We must work more closely with our suppliers to integrate their raw materials into our products. The reason the JSRs of the world exist is as a safety valve to reduce the variation from the chemical industry before it gets to the fab.”
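As a toy illustration of the point, the sketch below compares two hypothetical polymer lots whose certificate-of-analysis-style summaries (mean and standard deviation of molecular weight) look nearly the same, while the high-molecular-weight tail fraction that correlates with pattern defects differs markedly; all numbers are invented for illustration.

```python
# Toy illustration: two polymer lots can have nearly identical certificate-of-analysis
# summaries (mean and standard deviation of molecular weight) yet very different
# high-molecular-weight tails, which is where pattern-defect risk tends to hide.
# All numbers are invented for illustration.
import random
import statistics

random.seed(0)

def simulate_lot(n, tail_fraction):
    """Simulate molecular weights (g/mol): a main population plus a rare heavy tail."""
    weights = []
    for _ in range(n):
        if random.random() < tail_fraction:
            weights.append(random.gauss(20000, 1000))  # rare high-MW chains
        else:
            weights.append(random.gauss(10000, 1500))  # main population
    return weights

def summarize(name, weights, cutoff=15000):
    mean = statistics.mean(weights)
    std = statistics.stdev(weights)
    tail = sum(w > cutoff for w in weights) / len(weights)
    print(f"{name}: mean={mean:7.0f}  std={std:6.0f}  fraction above {cutoff} = {tail:.3%}")

if __name__ == "__main__":
    summarize("Lot A", simulate_lot(200_000, tail_fraction=0.000))
    summarize("Lot B", simulate_lot(200_000, tail_fraction=0.002))
```

The specific numbers are not the point: a figure-of-merit check would pass both lots, whereas a tail-fraction metric separates them clearly.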

Continued collaboration with equipment suppliers required as well

While the industry has been talking about the need for tighter collaboration between materials suppliers and equipment manufacturers for years, it still doesn’t always happen. “The material supplier and the equipment maker are tied together like kids in a three-legged race when we deliver an integrated system for consistent on-wafer performance,” says Cristina Chu, TEL/NEXX director of strategic business development, another speaker in the materials program.  “When we introduce changes to the tool hardware, we need to make sure it doesn’t upset the system. Similarly, we need the material supplier to send a bottle over when a new chemistry formulation is under development. If a new chemistry runs into problems in the field, it will take much more time for both of us to fix it at the customer site. The toolmaker can provide a slightly different perspective on applications, while being more objective than a customer on how the formulation performs compared to earlier versions.”

“The material supplier and the equipment maker are tied together like kids in a three-legged race when we deliver an integrated system for consistent on-wafer performance.” – Cristina Chu, TEL/NEXX

Regular and ongoing collaboration between chemistry suppliers and toolmakers enables the highest quality system solution to reach the customer. Chu notes that her team tries to maintain consistent collaborations with material suppliers across changes in organizations as the business environment changes. “For consistent on-wafer capabilities, we need a consistent collaboration process with chemistry suppliers. We need to meet with materials providers at a regular cadence throughout their development process. We need to check back with them as we scale up results from the coupon to the wafer level and to work out the kinks in the integrated solution together. The quality and consistency of our combined performance at the customer depends on ensuring the quality and consistency of our development and evaluation process as well.”

Fabs and subsystems suppliers look to pilot data sharing program to improve process margins

With ever tighter process margins, subtle variations in parameters that don’t appear in the specifications are also compromising results on the wafer, and neither the fab nor the supplier alone has the full information needed to improve performance. To help, a SEMI standards group is developing a protocol for a pilot program to standardize and automate some data sharing.

“In order for engineers to have constructive conversations about how to improve performance, we all need to exchange more information.” – Eric Bruce, Samsung Austin

The fab knows that performance is best with a particular parameter value, and knows when performance fluctuates, but often faces a black box problem with no way of knowing what exactly is wrong. In the rush to get the tool back up, the fab engineers may not get around to emailing the supplier about the issue for some time. The subsystems supplier, on the other hand, may know the cause of the variation, but likely has no way of knowing the critical parameters or ideal target values for the fab’s process. “In order for engineers to have constructive conversations about how to improve performance, we all need to exchange more information,” says Eric Bruce, Samsung Austin diffusion engineer and co-chair of the SEMI standards effort working on the issue, who will speak in the subsystems program at SEMICON West.

A potential solution could be to create a standard and automated process to share particular data, agreed to in the purchasing contract, whereby the subsystems supplier shares more information about their parameters with the fab, and the fab in return gives feedback on what parameters work best to drive improved performance. The best place to start will likely be on parts that do not contain core yield-related IP, but where usage and lifetime information is useful.

“We’re looking for people to participate in a pilot program to work together with suppliers to try sharing some information to improve performance,” says Bruce. “There’s a lot of this sharing in the backroom anyway, but this could make it fast and automated, and make everyone’s engineering job a lot easier.”