
By Ed Korczynski

As the commercial IC fabrication industry continues to shrink field-effect transistor (FET) sizes, 2D planar structures have evolved into 3D fins, which are now evolving into 3D stacks of 2D nano-sheets. While some researchers continue to work on integrating non-silicon “alternate channel” materials into finFETs for next-generation logic ICs, published results from labs around the world now show that nano-wires or nano-sheets of silicon will likely follow silicon finFETs in high-volume manufacturing (HVM) fabs.

Today’s finFETs are formed using self-aligned multi-patterning (SAMP) process flows with argon-fluoride immersion (ArFi) deep ultra-violet (DUV) steppers to provide arrays of equal-width lines. A block-mask can then pattern sets of lines into different numbers of fins per transistor to allow for different maximum current flows across the chip. When considering the next CMOS device structure to replace finFETs in commercial HVM we must anticipate the need to retain different current flows (ION) across the IC.

Gate-all-around (GAA) FETs can provide outstanding ION/IOFF ratios, and future logic ICs could be built using either horizontal or vertical GAA devices. While vertical-GAA transistors have been explored for memory chips, their manufacturing process flows are significantly different from those used to form finFETs. In contrast, horizontal-GAA FET processing can be seen as a logical extension of flows already developed and refined for fin structuring.

“With a number of scaling boosters, the industry will be able to extend finFET technology to the 7 or even 5nm node,” said An Steegen, EVP at imec’s Semiconductor Technology and Systems division. “Beyond, the gate-all-around (GAA) architecture appears as a practical solution since it reuses most of the finFET process steps.”

The figure shows simplified cross-sections of a finFET with a fin height (FH) of 50 nm, along with two different stacks of lateral nano-sheets (LNS, also known as horizontal nano-sheets or HNS), where the current flow would be normal to the cross-section. HNS are variations of horizontal nano-wires (HNW) with the wires widened, shown as 11nm and 21nm in the figure. The HNS are grown as epitaxial silicon separated by sacrificial silicon-germanium (SiGe) spacer layers.

Cross-sectional schematics of idealized (left) 50nm high finFET, (center) 5nm high by 11nm wide lateral-nano-sheets at 12-18nm vertical pitch, and (right) lateral-nano-sheets 21nm wide. (Source: imec)
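The trade-off in the figure can be made concrete by comparing effective channel width (W_eff), the gate-controlled perimeter that sets maximum current. A minimal sketch, assuming three stacked 5nm-thick sheets and an illustrative 7nm fin width (the sheet count and fin width are assumptions, not values stated in the figure):

```python
# Rough effective-channel-width (W_eff) comparison between a finFET and
# stacked lateral nano-sheets, using the dimensions shown in the figure.
# The 7 nm fin width and the count of 3 sheets are illustrative assumptions.

def fin_weff(fin_height_nm, fin_width_nm):
    """Gate wraps three sides of a fin: two sidewalls plus the top."""
    return 2 * fin_height_nm + fin_width_nm

def nanosheet_weff(sheet_width_nm, sheet_thickness_nm, num_sheets):
    """Gate-all-around wraps the full perimeter of every sheet."""
    return num_sheets * 2 * (sheet_width_nm + sheet_thickness_nm)

print(fin_weff(50, 7))            # 107 nm for a 50 nm fin
print(nanosheet_weff(11, 5, 3))   # 96 nm for three 11 nm sheets
print(nanosheet_weff(21, 5, 3))   # 156 nm for three 21 nm sheets
```

The wider 21nm sheets more than recover the fin's effective width in the same footprint, which is why widening the wires into sheets preserves the range of ION options that block-masked fin counts provide today.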

In an exclusive interview with Solid State Technology, Steegen discussed a few details of the process extensions needed to convert finFETs into HNS-FETs. The same work-function ALD metals can be used to tune threshold voltages, such that one epi-stack process can grow silicon for both n-type and p-type FETs. Happily, no new epitaxial reactors or precursor materials are needed. Isotropic etching of the SiGe vertical spacers, followed by filling the spaces with a deposited dielectric, may be the only new unit-processes needed.

Alternate channel wires and sheets

At the 2018 Symposia on VLSI Technology and Circuits, imec presented two papers on germanium as an alternate channel material for nanowire pFET devices. In the first paper they studied the electrical properties of strained germanium nanowire pFETs as high-end analog and high-performance digital solutions. The second paper demonstrated vertically-stacked GAA highly-strained germanium nanowire pFETs.

The commercial IC fab industry has considered use of alternate channels for planar devices and for finFETs, yet so far has found extensions of silicon to work well-enough for pFETs. Likewise, the first generation of HNS will likely use silicon channels for both nFETs and pFETs. Germanium GAA pFETs thus represent the ability to shrink HNS devices for future nodes.

Data economy era begins


July 10, 2018

By Shannon Davis

Speaking at the imec ITF Forum on Tuesday, Scott DeBoer, Executive Vice President of Technology Development at Micron, opened his keynote address with a video that featured astounding statistics: Micron memory and storage helps store the data generated by practically every type of smart device and high-speed computer processing, nearly 2.5 quintillion bytes per day.

“We’re turning information into insights and activating data to reach your higher realms of productivity and innovation,” the video’s narrator said. “We are Micron, and we are transforming how the world uses information to enrich lives.”

This would be the central theme of DeBoer’s talk, as he outlined the disruptive technology advancements taking place in the memory world and the markets they impact. According to DeBoer, we are in the early stages of the data economy.

In 2017, DeBoer indicated in his presentation, the data economy produced about 22,000 billion gigabytes, compared with earlier computing eras in the early part of the century, when about 250 billion gigabytes were created per year on average. The early stages of artificial intelligence, smart businesses, smart homes, and the interconnection of so many devices led DeBoer to make an astonishing prediction.

“Looking forward to 2021,” he said, “I’m projecting now: 62,000 billion gigabytes [per year]. Just a phenomenal growth path.”
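Taken at face value, those two figures imply a steep compound growth rate. A quick illustrative check, assuming simple compound annual growth between the 2017 and 2021 figures (the talk did not state a growth model):

```python
# Back-of-envelope check of the projection: 22,000 billion GB in 2017
# growing to 62,000 billion GB in 2021, assuming compound annual growth.
data_2017_zb = 22.0   # zettabytes (22,000 billion GB)
data_2021_zb = 62.0
years = 4
cagr = (data_2021_zb / data_2017_zb) ** (1 / years) - 1
print(f"{cagr:.1%}")  # prints 29.6%
```

That is roughly 30% compound annual growth, which makes "phenomenal growth path" a fair description.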

DeBoer said continued scaling of DRAM and 3D NAND, as well as the emergence of 3D XPoint memory technology, would be responsible for helping maintain this kind of explosive growth in the memory sector. 3D XPoint is considered a storage-class memory and, according to DeBoer, is currently the only emerging memory.

“The way that we approach memory technology today…is quite different,” said DeBoer.

DRAM technology 15 years ago, he said, was built around enabling a personal computer system, where the quality requirements for power and performance were well-defined, and scaling continued along an expected path for many years. Today, however, the broad spectrum of available technologies and emerging markets puts varying requirements on DRAM technology.

“The same DRAM component that is ideal for a data center is absolutely not ideal for either automotive or for a mobile kind of application,” said DeBoer.

In addition to scaling, Micron has had to identify different kinds of innovations, thinking outside the box to achieve performance and cost-effectiveness over the years in ways that went beyond simply scaling memory chips.

“It’s not just about scaling, it’s about coming up with other kinds of ideas for being able to improve performance and cost structure to get those high densities for these applications,” DeBoer said. One example of this he discussed was CMOS under array, which is taking 3D technology and performance to a new level: “By taking that logic technology and putting it all underneath your array and changing the architecture of the memory, you can fundamentally change the cost structure and you fundamentally changed the performance.”

DeBoer explained that this technology takes a manufacturable density of NAND and basically uses the infrastructure of that technology; the new technology is simply the interconnect between the two layers. This, he said, takes the pressure off of the equipment industry in terms of a variety of process capabilities. It also paves the way for future NAND scaling.

Near the end of his presentation, the audience chuckled along as DeBoer talked about building a computer at home with his son over the Fourth of July holiday weekend.

“I’m probably one of the only people that actually appreciated the fact that the memory cost was very high,” he laughed.

Process-induced overlay errors from outside the litho cell, including non-uniform wafer stress, have become a significant contributor to the overlay error budget.

BY HONGGOO LEE (a), SANGJUN HAN (a), JAESON WOO (a), JUNBEOM PARK (a), CHANGROCK SONG (a), FATIMA ANIS (b), PRADEEP VUKKADALA (b), SANGHUCK JEON (c), DONGSUB CHOI (c), KEVIN HUANG (b), HOYOUNG HEO (b), MARK D. SMITH (b), JOHN C. ROBINSON (b)

(a) SK Hynix, Korea
(b) KLA-Tencor Corp., Milpitas, CA
(c) KLA-Tencor Korea, Korea

As ground rules shrink, advanced technology nodes in semiconductor manufacturing demand smaller process margins and hence require improved process control. Overlay control has become one of the most critical parameters due to shrinking tolerances and a strong correlation to yield. Process-induced overlay errors from outside the litho cell, including non-uniform wafer stress, have become a significant contributor to the error budget. Previous studies have shown the correlation between process-induced stress and overlay and the opportunity for improvement in process control [1, 2]. Patterned wafer geometry (PWG) metrology has been used to reduce stress-induced overlay signatures by monitoring and improving non-litho process steps, or by compensating for these signatures through feed-forward corrections to the litho cell [3, 4]. Of paramount importance for volume semiconductor manufacturing is how to improve the magnitude of these signatures and the wafer-to-wafer variability. Standard advanced process control (APC) techniques provide a single set of control parameters for all wafers in a lot, and thereby only provide aggregate corrections on a per-chuck basis. This work involves a novel technique of using PWG metrology to provide improved litho control by wafer-level grouping based on incoming process-induced overlay.

Wafer stress induced overlay is becoming a major challenge in semiconductor manufacturing, and its percentage contribution to the overlay budget is increasing. Addressing non-litho overlay is paramount to reducing wafer-level variability. The amplitude of stress and the overlay budget differ by market segment. We observe from FIGURE 1 that 3D NAND, for example, has the largest magnitude of wafer-shape-induced stress, but also a relatively large overlay budget of 8 to 20 nm. DRAM, on the other hand, has less stress but a much tighter overlay spec of 2 to 3 nm. The relative stress level and overlay budget dictate different process control use cases. For 3D NAND, improved overlay can be achieved by using the PWG stress data for process monitoring, as mentioned earlier, or by directly providing stress-based feed-forward corrections to the litho cell [3, 4]. In this work, we focus on the DRAM device application. Key topics include identifying process signatures in the shape data and using those signatures to reduce within-lot variability.

Firstly, we will discuss the connection between wafer shape and overlay. During integrated circuit manufacturing many layers are printed on a silicon wafer. There is a critical need to precisely align pattern layers to an underlying pattern. This requirement is often complicated by process-induced stress variations distorting the under-layer pattern, as illustrated in FIGURE 2 [5, 6]. A reference layer pattern is formed at a certain level N (or layer N), and the pattern is initially defined by the characteristic length L shown. To form level N+1, a film is first deposited on top of level N. Film stress causes the wafer to warp in the free state, resulting in a change to the shape of the wafer. This is typically manifested as both out-of-plane displacement (OPD) and in-plane displacement (IPD), affecting lateral placement of the under-layer pattern (level N). To print the level N+1 pattern the wafer is forced flat (e.g. lithography vacuum chucked). For the most part, chucking the wafer fully reverses the out-of-plane displacement, but the in-plane displacement is only partially reversed. Thus, the under-layer pattern is now displaced relative to where it was originally printed. If the level N+1 pattern is printed without correcting for the under-layer distortion, the result is misalignment or overlay error between the two layers. Such an overlay error is known as process-induced or process-stress-induced overlay error, and it can be caused by any type of stress-inducing semiconductor process, such as film deposition, thermal anneal, etch, CMP, etc.

Wafer shape is measured by a unique implementation of a dual-Fizeau interferometer on KLA-Tencor Corporation’s WaferSight™ PWG patterned wafer geometry and nanotopography metrology system [7]. Simultaneous back-side and front-side measurements are made with the wafer in a vertical orientation to eliminate gravitational distortion.

Overlay is measured on a KLA-Tencor Corporation Archer™ 500 overlay metrology system using Archer AIM® optical imaging metrology targets.

It has been shown that process-induced overlay error can be accurately estimated from the change in shape induced by semiconductor processes [2, 6, 8, 9]. FIGURE 3 shows a simplified schematic of a semiconductor process flow for a single layer. To estimate the potential overlay error induced by processes between the reference lithography step (e.g. level N) and the current lithography step (e.g. level N+1), it is necessary to make wafer geometry measurements at the two points indicated in the figure as “pre” and “post”, corresponding to before and after the shape- or stress-inducing process steps. Once wafer geometry measurements become available, the change in shape induced by processing is calculated as the difference between the two measurements. Process-induced overlay error can then be calculated from the shape change using one of several algorithms that have been developed [2, 6, 8, 9]. In this paper, we use an advanced IPD algorithm based on two-dimensional plate mechanics, referred to as GEN3, for accurate estimation of the process-induced overlay error [2].
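The GEN3 algorithm itself is proprietary, but the basic idea of turning a measured shape change into a predicted in-plane displacement can be sketched with the simpler first-order plate model, IPD ≈ -(t/2)·∇(Δw), cited in the literature above. The wafer thickness, grid pitch, and parabolic test shape below are illustrative assumptions, not parameters from this study:

```python
import numpy as np

# Minimal sketch of the linear plate-theory estimate of in-plane displacement
# (IPD) from a measured shape change: IPD ~ -(t/2) * gradient(delta_w).
# This is the classic first-order model, not KLA-Tencor's GEN3 algorithm.

WAFER_THICKNESS_UM = 775.0   # nominal 300 mm wafer thickness
PIXEL_PITCH_UM = 200.0       # assumed lateral sampling of the shape map

def predicted_ipd(shape_pre_um, shape_post_um, pitch_um=PIXEL_PITCH_UM):
    """Return (ipd_x, ipd_y) maps in micrometers from pre/post shape maps."""
    delta_w = shape_post_um - shape_pre_um       # process-induced shape change
    dwdy, dwdx = np.gradient(delta_w, pitch_um)  # slopes of the shape change
    half_t = WAFER_THICKNESS_UM / 2.0
    return -half_t * dwdx, -half_t * dwdy

# A uniform bow change produces a radially growing displacement field.
y, x = np.mgrid[-50:51, -50:51] * PIXEL_PITCH_UM
bow = 1e-8 * (x**2 + y**2)                       # parabolic shape change, in um
ipd_x, ipd_y = predicted_ipd(np.zeros_like(bow), bow)
print(f"max |IPD_x|: {np.abs(ipd_x).max():.3f} um")
```

The displacement is zero at the wafer center and largest at the edge, which matches the familiar radial overlay signature of a bow change.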

Shape based overlay for DRAM

As discussed previously, different semiconductor device types, including 3D NAND, DRAM, and logic, have varying levels of stress and different overlay error budgets. These differences require different process control use cases, such as feedback, feed-forward, and grouping, alone or in combination. In this work we describe an advanced grouping process control use case for DRAM in order to minimize overlay. For this investigation we look at a specific implementation of wafer grouping, appropriate to R&D environments and the ramp-up of high-volume manufacturing (HVM), called here send-ahead grouping (SG). The more general grouping use case for HVM will be addressed in a future report.

In order to meet the tight overlay specifications for next-generation DRAM devices, a send-ahead grouping (SG) based on the shape data has been evaluated. The flow of the proposed SG is outlined in FIGURE 4. Firstly, all the wafers in a lot are measured with a PWG tool for both the “pre” and “post” layers. The shape data from the difference of these measurements is then used in the GEN3 algorithm to determine stress- or shape-based predicted overlay. The wafers are then grouped by similarity of wafer signatures. Grouping optimization is performed using the predicted overlay after removing the POR scanner alignment model. The grouping optimization: (i) decides the optimal number of process signatures; (ii) identifies the process signatures; and (iii) provides a list of recommended wafers for metrology and exposure (step 2 in Fig. 4). The selected wafers are then exposed by the scanner in step 3 and the overlay measurement is performed in step 4. Finally, the correctable coefficients for each group are calculated separately using the overlay metrology data. The exposed wafers are reworked and then the entire lot is exposed using the group-by-group corrections.
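The grouping step (step 2) amounts to clustering per-wafer predicted-overlay signatures and nominating one representative send-ahead wafer per cluster. A hedged sketch using a toy k-means on synthetic two-signature data; the paper does not disclose the actual grouping algorithm, so this only illustrates the concept:

```python
import numpy as np

# Sketch of wafer grouping: cluster per-wafer predicted-overlay signatures
# (flattened into feature vectors) and pick the wafer nearest each centroid
# as that group's send-ahead wafer. The two-signature synthetic data below
# stands in for real PWG-based predictions.

rng = np.random.default_rng(0)
sig_a = rng.normal(0.0, 0.1, size=(12, 50))   # wafers with process signature A
sig_b = rng.normal(1.0, 0.1, size=(13, 50))   # wafers with process signature B
wafers = np.vstack([sig_a, sig_b])

def kmeans(data, init_idx, iters=25):
    """Toy k-means; init_idx picks the initial center wafers."""
    centers = data[init_idx].astype(float)
    for _ in range(iters):
        dists = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        centers = np.array([data[labels == j].mean(axis=0)
                            for j in range(len(init_idx))])
    return labels, centers

labels, centers = kmeans(wafers, init_idx=[0, len(wafers) - 1])
send_ahead = [int(((wafers - c) ** 2).sum(axis=1).argmin()) for c in centers]
print(labels, send_ahead)
```

In production, step (i) of the optimization would also select the number of clusters; here it is fixed at two for clarity.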

Within lot variability

This work is aimed at reducing within-lot variability. Within-lot, or wafer-by-wafer (WxW), variability is becoming one of the most important challenges in achieving the tight overlay specifications for next-generation DRAM devices. First we quantify within-lot variability for both the shape and the overlay data using a rigorous analysis of variance (ANOVA). We analyzed seven lots individually, and the results for the overlay and PWG data are presented in FIGURES 5 and 6, respectively. The overlay data show an average of 3.6 nm WxW variation in both the X and Y directions. The shape-based overlay average within-lot variation is 0.55 nm in X and 0.46 nm in Y.
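The ANOVA used here can be illustrated with a standard one-way variance-components calculation, separating the wafer-to-wafer component from within-wafer residual noise. The synthetic overlay numbers below are illustrative, not the study's data:

```python
import numpy as np

# One-way variance decomposition (ANOVA) for a single lot: separate
# wafer-to-wafer (WxW) variation from within-wafer site-level noise.
# Synthetic data: 25 wafers x 60 sites, true WxW sigma of 3.6 nm.

rng = np.random.default_rng(1)
n_wafers, n_sites = 25, 60
wafer_offsets = rng.normal(0.0, 3.6, size=n_wafers)          # nm, WxW component
overlay = wafer_offsets[:, None] + rng.normal(0.0, 1.0, size=(n_wafers, n_sites))

grand_mean = overlay.mean()
wafer_means = overlay.mean(axis=1)
ss_between = n_sites * ((wafer_means - grand_mean) ** 2).sum()
ss_within = ((overlay - wafer_means[:, None]) ** 2).sum()
ms_between = ss_between / (n_wafers - 1)
ms_within = ss_within / (n_wafers * (n_sites - 1))
wxw_variance = (ms_between - ms_within) / n_sites            # variance component
print(f"estimated WxW sigma: {wxw_variance ** 0.5:.2f} nm")
```

The same decomposition applied to shape-based predicted overlay gives the second set of (much smaller) WxW numbers quoted above.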

It should be noted that the within-lot variation of the overlay data comprises different sources, and the shape-based overlay explains only part of the total within-lot overlay variation. FIGURE 7 shows the percentage ratio of the within-lot variation of shape-based overlay to the total overlay for both the X and Y directions. It can be seen that shape-based overlay can explain up to 25% of the total overlay variability. These findings indicate that minimizing the impact of stress-based overlay from processes outside the litho cell will provide potentially significant improvement, which is critical in the drive towards 2 nm overlay.

DRAM clustering results

For all of the analyses presented in this study, the GEN3 algorithm was used to calculate stress-based overlay. To perform grouping, the scanner alignment model was first removed from the stress-based overlay for each wafer. The alignment removes some of the within-lot variation; however, wafer-level alignment is not sufficient to remove all the wafer-level variation. One useful way to visualize data variation is by performing principal component analysis (PCA) of the data. By performing PCA, we express the data in terms of eigenfunctions of the covariance matrix of the data. Eigenvalues of the covariance matrix are calculated such that the first principal component explains the largest variation of the data, the second explains the second-largest variation, and so on. The coefficient for each principal component (PC) is referred to as the score. FIGURE 8 shows scores for PC1 (the first principal component) versus PC2 (the second principal component) for all the wafers of a single lot using stress-based overlay. Two distinct groups, indicating two distinct process signatures, can clearly be observed in this lot.
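A score plot like FIGURE 8 can be reproduced in miniature: stack the per-wafer (alignment-removed) overlay vectors, mean-center them, and take the SVD; the scores are the projections onto the leading singular vectors. The two synthetic signatures below stand in for the two process-tool stages found in the data:

```python
import numpy as np

# Sketch of the PCA score computation behind a PC1-vs-PC2 plot.
# Synthetic per-wafer vectors carry one of two distinct signatures,
# mimicking wafers processed on two stages of one process tool.

rng = np.random.default_rng(2)
base_a = np.sin(np.linspace(0, 3, 80))       # signature of tool stage A
base_b = np.cos(np.linspace(0, 3, 80))       # signature of tool stage B
wafers = np.vstack([base_a + rng.normal(0, 0.05, (13, 80)),
                    base_b + rng.normal(0, 0.05, (12, 80))])

centered = wafers - wafers.mean(axis=0)      # mean-center before SVD
u, s, vt = np.linalg.svd(centered, full_matrices=False)
scores = u * s                               # columns are PC1, PC2, ... scores
pc1, pc2 = scores[:, 0], scores[:, 1]
print(pc1[:13].mean(), pc1[13:].mean())      # the two groups separate on PC1
```

Plotting pc1 against pc2 would show the two clusters directly, as in FIGURE 8.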

The same analysis was performed for the remaining six lots, as shown in FIGURE 9. For all the lots in this example, two signatures can clearly be observed in their leading-scores plots. Some excursion wafers were removed from the analysis. After observing these clear process-signature groupings, it was confirmed that the signatures correspond to the two stages of a process tool. This clearly proves that the stress overlay grouping method can successfully identify and distinguish significant process signatures. It should be noted that in the general case the optimal number of groups would not necessarily be two.

We quantified the stress overlay grouping by performing a comprehensive send-ahead grouping (SG) simulation study. Grouping optimization was performed using the shape data to select the optimal number of groups and also the send-ahead wafers for processing and metrology. Then, using the send-ahead wafers for each group, ideal corrections were simulated and applied to each group in the lot. From the composite group residual, |mean|+3σ for each wafer was recorded. The residual |mean|+3σ was also calculated using the standard plan-of-record (POR) wafers. The root mean square of the average |mean|+3σ for X and Y is compared between SG and POR in FIGURE 10. The average |mean|+3σ improved by more than 0.5 nm using the SG solution.
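The |mean|+3σ figure of merit and the RMS-of-averages comparison are straightforward to compute. A sketch on synthetic residuals (the values below are illustrative, not the study's data):

```python
import numpy as np

# Sketch of the per-wafer |mean| + 3*sigma overlay residual metric and the
# composite RMS-of-averages number compared between SG and POR in the study.

def mean_plus_3sigma(residuals_nm):
    """|mean| + 3*sigma over one wafer's overlay residuals."""
    return abs(np.mean(residuals_nm)) + 3 * np.std(residuals_nm)

rng = np.random.default_rng(3)
resid_x = rng.normal(0.2, 1.0, size=(25, 60))   # per-wafer X residuals, nm
resid_y = rng.normal(0.1, 1.1, size=(25, 60))   # per-wafer Y residuals, nm

avg_x = np.mean([mean_plus_3sigma(w) for w in resid_x])
avg_y = np.mean([mean_plus_3sigma(w) for w in resid_y])
rms_xy = np.sqrt((avg_x**2 + avg_y**2) / 2)     # composite figure of merit
print(f"RMS of avg |mean|+3sigma: {rms_xy:.2f} nm")
```

Computing this once with POR corrections and once with per-group SG corrections gives the comparison shown in FIGURE 10.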

The range is defined as the difference of the maximum and minimum |mean|+3σ per lot for both the X and Y direction. FIGURE 11 shows the comparison of the RMS of X and Y ranges for the six lots. The range has been improved by about 1 nm, underscoring the benefit of controlling wafer level variation by using shape data to identify signatures and group wafers for exposure and metrology.

Conclusions

Process induced overlay errors from outside the litho cell have become a significant contributor to the overlay error budget. It is no longer sufficient to focus exclusively on litho cell overlay improvement. Addressing non-litho overlay is key to reducing wafer level variability. We demonstrated a novel technique of using PWG metrology to provide improved litho control by wafer-level grouping based on incoming process induced overlay in a 19 nm DRAM manufacturing process driving towards a 2 nm overlay budget. Wafer to wafer variability range was reduced by around 1 nm across the lots in this study. Future directions include a full HVM implementation of the grouping methodology.

References

1. T. Brunner et al., “Characterization and mitigation of overlay error on silicon wafers with nonuniform stress,” Proc. SPIE 9052: Optical Microlithography XXVII, April 2014.
2. T. A. Brunner et al., “Patterned wafer geometry (PWG) metrology for improving process-induced overlay and focus problems,” Proc. SPIE 9780: Optical Microlithography XXIX, 97800W, March 2016.
3. H. Lee et al., “Improvement of process control using wafer geometry for enhanced manufacturability of advanced semiconductor devices,” Proc. SPIE 9424: Metrology, Inspection, and Process Control for Microlithography XXIX, April 2015.
4. J. Peterson et al., “Lithography overlay control improvement using patterned wafer geometry for sub-22nm technology nodes,” Proc. SPIE 9424: Metrology, Inspection, and Process Control for Microlithography XXIX, April 2015.
5. K. T. Turner et al., “Relationship between localized wafer shape changes induced by residual stress and overlay errors,” J. Micro/Nanolithogr. MEMS MOEMS 11(1), 013001, December 2012.
6. T. A. Brunner et al., “Characterization of wafer geometry and overlay error on silicon wafers with nonuniform stress,” J. Micro/Nanolithogr. MEMS MOEMS 12(4), 043002, September 2013.
7. K. Freischlad, S. Tang, and J. Grenfell, “Interferometry for wafer dimensional metrology,” Proc. SPIE 6672, 667202, 2007.
8. K. T. Turner et al., “Monitoring process-induced overlay errors through high resolution wafer geometry measurements,” Proc. SPIE 9050: Metrology, Inspection, and Process Control for Microlithography XXVIII, 905013, April 2014.
9. D. Anberg et al., “Process tool monitoring and matching using interferometry technique,” Proc. SPIE 9778: Metrology, Inspection, and Process Control for Microlithography XXX, 977831, April 2016.

Reprinted with permission. Original source: Honggoo Lee, Sangjun Han, Jaeson Woo, Junbeom Park, Changrock Song, et al., “Patterned Wafer Geometry Grouping for Improved Overlay Control,” Metrology, Inspection, and Process Control for Microlithography XXXI, edited by Martha I. Sanchez, Vladimir A. Ukraintsev, Proc. of SPIE Vol. 10145, 101450O, (2017).

To eliminate voids, it is important to control the process to minimize moisture absorption and optimize a curing profile for die attach materials.

BY RONGWEI ZHANG and VIKAS GUPTA, Semiconductor Packaging, Texas Instruments Inc., Dallas, TX

Polymeric die attach material, either in paste or in film form, is the most common type of adhesive used to attach chips to metallic or organic substrates in plastic-encapsulated IC packages. It offers many advantages over solders, such as lower processing temperatures, lower stress, ease of application, excellent adhesion and a wide variety of products to meet specific applications. As microelectronics move towards thinner, smaller form factors, increased functionality, and higher power density, void formation in die attach joints (FIGURE 1), i.e. in die attach materials and/or at die attach interfaces, is one of the key issues posing challenges for thermal management, electrical insulation and package reliability.

Impact of voids

Voids in die attach joints have a significant impact on die attach material cracking and interfacial delamination. Voids increase moisture absorption. If plastic packages with a large amount of absorbed moisture are subjected to a reflow process, the absorbed moisture (or condensed water in the voids) will vaporize, resulting in a higher vapor pressure. Moreover, stress concentrations occur near the voids and are frequently responsible for crack initiation. On the other hand, voids at the interface can degrade adhesive strength. The combined effect of higher vapor pressure, stress concentration around the voids and decreased adhesion, as a result of void formation, will make the package more susceptible to delamination and cracking [1].

Additionally, in plastic packages with an exposed pad, heat is dissipated mainly through the die attach layer to the exposed pad. Voids in die attach joints can result in a higher thermal resistance and thus increase junction temperatures significantly, thereby impacting power device performance and reliability.

And finally, voiding is known to adversely affect electrical performance. Voiding can increase the volume resistivity of electrically conductive die attach materials, while decreasing electrical isolation capability. Therefore, it is crucial to minimize or eliminate voids in die attach joints to prevent mechanical, thermal and electrical failures.

Void detection

The ability to detect voids is key to ensuring the quality and reliability of die attach joints. There are four common techniques to detect voids: (1) scanning acoustic microscopy (SAM), (2) X-ray imaging, (3) cross-sectioning or parallel polishing with an optical or electron microscope, and (4) a glass die/slide with an optical microscope (Fig. 1). The significant advantage of SAM over other techniques lies in its ability to detect voids in different layers within a package non-destructively. Void detection is limited by the minimum defect size resolvable by SAM; if a void is too small, it may not be detected at all, depending on the package and equipment used. X-ray analysis allows for non-destructive detection of voids in silver-filled die attach materials. However, its limits lie in its low resolution and magnification, a low sensitivity for detecting voids in thick samples, and its inability to differentiate voids at different interfaces [2]. Cross-sectioning or parallel polishing with an electron microscope provides a very high magnification image to detect small voids, although it is destructive and time-consuming. A glass die or glass substrate with an optical microscope provides a simple, quick and easy way to visualize the voids.

Potential root causes of voids and solutions

There are four major sources of voids: (1) air trapped during a thawing process, (2) moisture induced voids, (3) voids formed during die attach film (DAF) lamination, and (4) volatile induced voids.

Freeze-thaw voids

When an uncured die attach paste in a plastic syringe is removed from a freezer (typically -40°C) to an ambient environment for thawing, the syringe warms and expands faster than the adhesive. This introduces a gap between the syringe and the adhesive. Upon thawing, the adhesive will re-wet the syringe wall, and air located between the container and the adhesive may become trapped. As a result, voids form. This is referred to as a freeze-thaw void [3]. Voids in pastes may cause an incomplete dispensing pattern, leading to inconsistent bond line thickness (BLT) and die tilt, thus causing delamination. A planetary centrifugal mixer is the most commonly used and effective equipment to remove this type of void.

Moisture induced voids

Die attach materials contain polar functional groups, such as hydroxyl groups in epoxy resins and amide groups in curing agents, which absorb moisture from the environment during exposure in the die attach process. As the industry moves to larger lead frame strips (100mm x 300mm), the total number of units on a lead frame strip increases significantly. As a result, die attach pastes may be exposed to the production environment significantly longer before die placement. After die placement, there can also be a significant amount of waiting time (up to 24 hours) before curing. Both can result in high moisture absorption in die attach pastes. Moreover, organic substrates can absorb moisture, while moisture may be present on metal lead frame surfaces. As temperatures increase during curing, absorbed moisture or condensed water will evolve as steam and cause voiding. Voids can also form at the DAF-substrate interface as a result of moisture uptake during the staging time between the film attach and encapsulation processes. Controlling the moisture absorption of substrates and die attach materials at each stage before curing, as well as the production environment, is critical to preventing moisture-induced voids in die attach joints.

Void formation during DAF lamination

One challenge associated with DAF is voiding during DAF lamination, especially when it is applied to organic substrates [FIGURE 1(d)]. There is a correlation between the void pattern and the substrate surface topography [4]. Generally, increasing temperature, pressure and press time can reduce DAF melt viscosity and enable the DAF to better wet the lead frame or substrate, thereby preventing entrapment of voids during the die attach process. If the DAF curing percentage is high before molding, the DAF has limited flow ability and thus cannot completely fill the large gaps on the substrate. Consequently, voids are present at the interface between the DAF and an organic substrate from the die bonding process onward. But if the DAF has a lower curing percentage before molding, it can re-soften and flow into large gaps under heat and transfer pressure to achieve a void-free bond line post molding [4].

Volatile induced voids

Voids in die attach joints are generally formed during thermal curing since die attach pastes contain volatiles such as low molecular weight additives, diluents, and in some cases solvents for adjusting the viscosity for dispensing or printing. To study the effect of outgassing amounts on voids, we select three commercially available die attach materials with a significant difference in outgassing amounts using the same curing profile. As shown in FIGURE 2, as temperature increases, all die attach pastes outgas. DA1 shows a weight loss of 0.74wt%, DA2 3.1wt% and DA3 10.62wt%. Once volatiles start to outgas during thermal curing, they will begin to accumulate within the die attach material or at die attach interfaces. Voids begin to form by the entrapment of outgassing species or moisture. After voids initially form, voids can continue to grow until the volatiles have been consumed or the paste has been cured enough to form a highly cross- linked network. FIGURE 3 shows optical images of dices assembled onto glass slides using three die attach materials. As expected, DA1 shows no voids for both die sizes of 2.9mm x 2.9mm and of 9.0mm x 9.2mm, due to a very low amount of outgassing (0.74wt%). DA2 shows no voids for the small die size, but many small voids under the die periphery for the large die. Large voids are observed for DA3 for both die sizes since it has a very large amount of outgassing (10.62wt%). DA2 also shows voids even with a medium die size 6.4mm x 6.4mm [FIGURE 3(g)]. Differential Scanning Calorimetry (DSC) was used to further study the curing behaviors of DA2 and DA3, as shown in FIGURES 4 and 5. Comparing FIGURE 4 with FIGURE 5, it is interesting to observe the difference in thermal behavior of the two die attach materials. For DA2, as curing starts, the weight loss rate becomes slower, while the weight loss rate for DA3 accelerates as curing starts. 
It is very likely that the outgassing species in DA2 is a reactive diluent, whose weight loss rate drops once the reaction starts. For DA3, the outgassing species is a non-reactive solvent, possibly together with other reactive species. The non-reactive solvent has a boiling point of 172.9°C, as verified by DSC, and the heat generated in the curing process accelerates its evaporation. The continuous, slow release of volatiles during ramp and curing at 180°C explains the formation of small voids in DA2, while the fast evaporation of solvent accounts for the large voids in DA3. To reduce or eliminate voids during thermal curing, the simplest and most common approach is a two-step (or multi-step) cure: the first step is designed to remove volatiles, followed by a second step of curing. With a first step at 120°C for 1h to remove more volatiles, DA2 shows significantly fewer voids for a die size of 6.4mm x 6.4mm [FIGURE 3(h)].

Ideally, the majority (if not all) of the volatiles should be removed before the gelation point, which is defined as the intersection of G' and G'' in a rheological test, because the viscosity of die attach materials increases dramatically beyond that point. Volatiles released after the gelation point (i.e., at a later stage of curing) are therefore more likely to form voids. The combined characterization by TGA and DSC, together with rheological testing, thus provides a good guideline for designing curing profiles that minimize or eliminate voids.
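As an aside on how the gelation point can be extracted in practice, the crossover of G' and G'' can be located programmatically from rheometer data. The sketch below uses made-up modulus values purely for illustration; it is not data from this study:

```python
# Locate the gelation point: the first time at which storage modulus G'
# rises above loss modulus G'' during cure (linear interpolation between
# sample points). All numbers below are invented illustrative values.

def gel_point(times, g_storage, g_loss):
    """Return the interpolated time of the G'/G'' crossover, or None."""
    for i in range(1, len(times)):
        d0 = g_storage[i - 1] - g_loss[i - 1]   # G' - G'' before the step
        d1 = g_storage[i] - g_loss[i]           # G' - G'' after the step
        if d0 < 0 <= d1:                        # sign change: crossover here
            frac = -d0 / (d1 - d0)
            return times[i - 1] + frac * (times[i] - times[i - 1])
    return None

t        = [0, 5, 10, 15, 20]           # minutes at cure temperature
g_prime  = [10, 40, 120, 500, 2000]     # G' (Pa), rises steeply as the network forms
g_2prime = [100, 150, 180, 200, 210]    # G'' (Pa)

print(gel_point(t, g_prime, g_2prime))  # crossover between 10 and 15 minutes
```

A real analysis would read the moduli from the rheometer's export file, but the crossover logic is the same.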

Summary

This article provides an understanding of the impact of voids in die attach joints, the techniques to detect them, the voiding mechanisms, and their corresponding solutions. To eliminate voids, it is important to control the process to minimize moisture absorption and to optimize the curing profile for the die attach material. TGA, DSC and rheometry are the key analytical tools for optimizing a curing profile to prevent voiding. In addition, many other properties, such as modulus, coefficient of thermal expansion (CTE) and adhesion, need to be considered when optimizing curing profiles. Last but not least, it is crucial to develop die attach materials with less outgassing and moisture absorption without compromising manufacturability, reliability and performance.

References

1. R. W. Zhang, et al., “Solving delamination in lead frame-based packages,” Chip Scale Review, 2015, pp. 44-48.
2. L. Angrisani, et al., “Detection and location of defects in electronic devices by means of scanning ultrasonic microscopy and the wavelet transform,” Measurement, 2002, Vol. 31, pp. 77-91.
3. D. Wyatt, et al., “Method for reducing freeze-thaw voids in uncured adhesives,” 2006, US 11/402,170.
4. Y. Q. Su, et al., “Effect of transfer pressure on die attach film void performance,” 2009 IEEE 11th Electronic Packaging Technology Conference, pp. 754-757.

RONGWEI ZHANG is a Packaging Engineer, and VIKAS GUPTA is an Engineering Manager, Semiconductor Packaging, Texas Instruments Inc., Dallas, TX.

Directly converting electrical power to heat is easy. It regularly happens in your toaster, that is, if you make toast regularly. The opposite, converting heat into electrical power, isn’t so easy.

Researchers from Sandia National Laboratories have developed a tiny silicon-based device that can harness what was previously called waste heat and turn it into DC power. Their advance was recently published in Physical Review Applied.

This tiny silicon-based device developed at Sandia National Laboratories can catch and convert waste heat into electrical power. The rectenna, short for rectifying antenna, is made of common aluminum, silicon and silicon dioxide using standard processes from the integrated circuit industry. Credit: Photo by Randy Montoya/Sandia National Laboratories

“We have developed a new method for essentially recovering energy from waste heat. Car engines produce a lot of heat and that heat is just waste, right? So imagine if you could convert that engine heat into electrical power for a hybrid car. This is the first step in that direction, but much more work needs to be done,” said Paul Davids, a physicist and the principal investigator for the study.

“In the short term we’re looking to make a compact infrared power supply, perhaps to replace radioisotope thermoelectric generators.” Called RTGs, the generators are used for such tasks as powering sensors for space missions that don’t get enough direct sunlight to power solar panels.

Davids’ device is made of common and abundant materials, such as aluminum, silicon and silicon dioxide — or glass — combined in very uncommon ways.

Silicon device catches, channels and converts heat into power

Smaller than a pinkie nail, the device is about 1/8 inch by 1/8 inch, half as thick as a dime and metallically shiny. The top is aluminum that is etched with stripes roughly 20 times smaller than the width of a human hair. This pattern, though far too small to be seen by eye, serves as an antenna to catch the infrared radiation.

Between the aluminum top and the silicon bottom is a very thin layer of silicon dioxide. This layer is about 20 silicon atoms thick, or 16,000 times thinner than a human hair. The patterned and etched aluminum antenna channels the infrared radiation into this thin layer.

The infrared radiation trapped in the silicon dioxide creates very fast electrical oscillations, about 50 trillion times a second. This pushes electrons back and forth between the aluminum and the silicon in an asymmetric manner. This process, called rectification, generates net DC electrical current.

The team calls its device an infrared rectenna, a portmanteau of rectifying antenna. It is a solid-state device with no moving parts to jam, bend or break, and doesn’t have to directly touch the heat source, which can cause thermal stress.

Infrared rectenna production uses common, scalable processes

Because the team makes the infrared rectenna with the same processes used by the integrated circuit industry, it’s readily scalable, said Joshua Shank, electrical engineer and the paper’s first author, who tested the devices and modeled the underlying physics while he was a Sandia postdoctoral fellow.

He added, “We’ve deliberately focused on common materials and processes that are scalable. In theory, any commercial integrated circuit fabrication facility could make these rectennas.”

That isn’t to say creating the current device was easy. Rob Jarecki, the fabrication engineer who led process development, said, “There’s immense complexity under the hood and the devices require all kinds of processing tricks to build them.”

One of the biggest fabrication challenges was inserting small amounts of other elements into the silicon, or doping it, so that it would reflect infrared light like a metal, said Jarecki. “Typically you don’t dope silicon to death, you don’t try to turn it into a metal, because you have metals for that. In this case we needed it doped as much as possible without wrecking the material.”

The devices were made at Sandia’s Microsystems Engineering, Science and Applications Complex. The team has been issued a patent for the infrared rectenna and has filed several additional patents.

The version of the infrared rectenna the team reported in Physical Review Applied produces 8 nanowatts of power per square centimeter from a specialized heat lamp at 840 degrees. For context, a typical solar-powered calculator uses about 5 microwatts, so they would need a sheet of infrared rectennas slightly larger than a standard piece of paper to power a calculator. So, the team has many ideas for future improvements to make the infrared rectenna more efficient.
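The sheet-of-paper comparison follows directly from the quoted numbers; a quick back-of-the-envelope check, using only the figures stated above:

```python
# Back-of-the-envelope check of the power numbers quoted above.
rectenna_power = 8e-9      # W per cm^2, as reported in Physical Review Applied
calculator_power = 5e-6    # W, typical solar-powered calculator

area_cm2 = calculator_power / rectenna_power
print(area_cm2)            # 625.0 cm^2

# A standard US letter sheet is 8.5 x 11 in = 21.59 x 27.94 cm, about 603 cm^2,
# so the required rectenna sheet is indeed slightly larger than a page.
```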

Future work to improve infrared rectenna efficiency

These ideas include making the rectenna’s top pattern 2D x’s instead of 1D stripes, in order to absorb infrared light over all polarizations; redesigning the rectifying layer to be a full-wave rectifier instead of the current half-wave rectifier; and making the infrared rectenna on a thinner silicon wafer to minimize power loss due to resistance.
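To see why moving from a half-wave to a full-wave rectifying layer is attractive, a simple numeric integration of an ideal sine shows the DC component roughly doubling. This is a textbook illustration, not a model of the actual device:

```python
# Average (DC) component of half-wave vs full-wave rectification of a sine,
# computed by a Riemann sum over one full period. Analytically these are
# 1/pi and 2/pi of the amplitude, so full-wave doubles the harvested DC.
import math

N = 100000
dt = 2 * math.pi / N
half = sum(max(math.sin(i * dt), 0.0) for i in range(N)) / N   # clip negatives
full = sum(abs(math.sin(i * dt)) for i in range(N)) / N        # fold negatives up

print(round(half, 4), round(full, 4))   # ≈ 0.3183 (1/pi) and ≈ 0.6366 (2/pi)
```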

Through improved design and greater conversion efficiency, the power output per unit area will increase. Davids thinks that within five years, the infrared rectenna may be a good alternative to RTGs for compact power supplies.

Shank said, “We need to continue to improve in order to be comparable to RTGs, but the rectennas will be useful for any application where you need something to work reliably for a long time and where you can’t go in and just change the battery. However, we’re not going to be an alternative for solar panels as a source of grid-scale power, at least not in the near term.”

Davids added, “We’ve been whittling away at the problem and now we’re beginning to get to the point where we’re seeing relatively large gains in power conversion, and I think that there’s a path forward as an alternative to thermoelectrics. It feels good to get to this point. It would be great if we could scale it up and change the world.”

By Paula Doe

Chip testing is becoming smarter and more complex, creating growing requirements to stream data in real time and ensure it is ready to use for analysis, regardless of the vendor source.

Adaptive testing using machine learning to predict die performance in a downstream test can reduce the number of cycles by as much as 40 per cent without compromising test performance, notes Dan Sebban, VP of data analysis, OptimalPlus, who’ll speak on machine learning challenges at SEMICON West’s Test Vision 2020 program. “As devices and their test requirements grow in complexity, the motivation for automating adaptive test greatly increases,” he states, adding that characteristics such as die location on the wafer, defects on neighboring die, condition of the tester, and test values near the specification limits can help predict which die are likely to be good.
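As a rough sketch of the idea (and not OptimalPlus's actual model), a per-die risk score built from upstream signals like those Sebban lists can decide which dies may skip a downstream test. All feature names, weights and the threshold below are invented for illustration:

```python
# Hypothetical adaptive-test sketch: score each die's failure risk from
# upstream data, then run the downstream test only on high-risk dies.
# Feature names, weights and the threshold are illustrative, not real values.

def risk_score(die):
    """Combine upstream signals into a rough per-die risk estimate."""
    edge = 1.0 if die["radius_mm"] > 140 else 0.0     # edge dies fail more often
    neighbors = min(die["bad_neighbors"], 4) / 4.0    # defects cluster spatially
    margin = 1.0 - die["spec_margin"]                 # values near spec limits
    return 0.3 * edge + 0.4 * neighbors + 0.3 * margin

def adaptive_plan(dies, threshold=0.35):
    """Return the subset of dies that still need the downstream test."""
    return [d for d in dies if risk_score(d) >= threshold]

dies = [
    {"id": 0, "radius_mm": 10,  "bad_neighbors": 0, "spec_margin": 0.9},
    {"id": 1, "radius_mm": 145, "bad_neighbors": 1, "spec_margin": 0.5},
    {"id": 2, "radius_mm": 60,  "bad_neighbors": 3, "spec_margin": 0.2},
    {"id": 3, "radius_mm": 30,  "bad_neighbors": 0, "spec_margin": 0.8},
]
to_test = adaptive_plan(dies)
print("dies still tested:", [d["id"] for d in to_test])       # [1, 2]
print("cycles saved:", 100 * (len(dies) - len(to_test)) // len(dies), "%")
```

A production system would learn the weights from historical test data with a trained model rather than hand-set them, which is exactly where the black-box concern Sebban raises comes in.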

“The big issue we see is that while everyone likes the idea of machine learning, it remains a black box model, with little visibility into why it makes the decisions it does,” adds Sebban. In addition, a suitable infrastructure to run, deploy and assess a machine learning model in real time is required. “There is still some hesitation to adopt machine learning. It’s a big change of mindset. While building the confidence to use machine learning will take time and experience, using the technology to automate big data analysis with the relevant infrastructure may be our best alternative to reduce test cost.”

Systems test and parts-per-billion quality become the rule

Systems test will continue to become more prominent and more complex as chips and packages shrink, affirms Stacy Ajouri, Texas Instruments system integration engineer and Test Vision 2020 event chair. “Even IC makers now need to start doing more systems test.” And as more ICs are used in automotive applications, the distinction between consumer and automotive requirements is blurring, driving demand in other markets for higher precision test with parts-per-billion defectivity requirements.

“Intelligent test gets increasingly challenging as devices become more complex and as testing moves from distinguishing good from bad devices to figuring out how to repair and trim marginal devices to make them good,” adds Derek Floyd, Advantest director of business development, this year’s program chair.

“We’re highlighting efforts to create the infrastructure the industry needs to manage big data for machine learning with test platforms from different vendors,” says Ajouri, citing work on new standards for streaming data from the testers and labeling critical steps in consistent language to simplify the use of data from different platforms in real time. “I have 10 platforms from multiple vendors, and I need them to mean exactly the same thing by ‘lot’ so I don’t have to sort it out before I can use the data,” she says.

Are devices becoming too complicated to test at the required price point?

Can testing be economical with up to a million die per wafer, 50 data points per die, a requirement for parts-per-billion accuracy, and the need to identify parts that test good now but that might fail in the future? Organizers of the event invite chipmakers and test suppliers to debate the issue. “The speed of innovation in the semiconductor industry challenges test to keep pace,” notes Floyd. “The product we’re testing is always ahead of the product we have to test it with.”

The two-day event features sessions on automotive test; big data and machine learning for adaptive test; handling and interface issues such as over-the-air testing; and a general session covering memory and RF test.

Researchers at Kyushu University’s Center for Organic Photonics and Electronics Research (OPERA) in Japan have demonstrated a way to split energy in organic light-emitting diodes (OLEDs) and surpass the 100% limit for exciton production, opening a promising new route for creating low-cost and high-intensity near-infrared light sources for sensing and communications applications.

OLEDs use layers of carbon-containing organic molecules to convert electrical charges into light. In normal OLEDs, one positive charge and one negative charge come together on a molecule to form a packet of energy called an exciton. One exciton can release its energy to create at most one beam of light, or photon.

Illustration of the singlet fission process used to boost the number of excitons in an OLED and break the 100 percent limit for exciton production efficiency. The emitting layer consists of a mixture of rubrene molecules, which are responsible for singlet fission, and ErQ3 molecules, which produce the emission. A singlet exciton, which is created when a positive charge and a negative charge combine on a rubrene molecule, can transfer half of its energy to a second rubrene molecule through the process of singlet fission, resulting in two triplet excitons. The triplet excitons then transfer to ErQ3 molecules, and the exciton energy is released as near-infrared emission by ErQ3. Credit: William J. Potscavage Jr.

When all charges form excitons that emit light, a maximum 100% internal quantum efficiency is achieved. However, the new technology uses a process called singlet fission to split the energy from an exciton into two, making it possible to exceed the 100% limit for the efficiency of converting charge pairs into excitons, also known as the exciton production efficiency.

“Put simply, we incorporated molecules that act as change machines for excitons in OLEDs. Similar to a change machine that converts a $10 bill into two $5 bills, the molecules convert an expensive, high-energy exciton into two half-price, low-energy excitons,” explains Hajime Nakanotani, associate professor at Kyushu University and co-author of the paper describing the new results.

Excitons come in two forms, singlets and triplets, and molecules can only receive singlets or triplets with certain energies. The researchers overcame the limit of one exciton per one pair of charges by using molecules that can accept a triplet exciton with an energy that is half the energy of the molecule’s singlet exciton.

In such molecules, the singlet can transfer half of its energy to a neighboring molecule while keeping half of the energy for itself, resulting in the creation of two triplets from one singlet. This process is called singlet fission.

The triplet excitons are then transferred to a second type of molecule that uses the energy to emit near-infrared light. In the present work, the researchers were able to convert the charge pairs into 100.8% triplets, indicating that 100% is no longer the limit. This is the first report of an OLED using singlet fission, though it has previously been observed in organic solar cells.

Furthermore, the researchers could easily evaluate the singlet fission efficiency, which is often difficult to estimate, based on comparison of the near-infrared emission and trace amounts of visible emission from remaining singlets when the device is exposed to various magnetic fields.

“Near-infrared light plays a key role in biological and medical applications along with communications technologies,” says Chihaya Adachi, director of OPERA. “Now that we know singlet fission can be used in an OLED, we have a new path to potentially overcome the challenge of creating an efficient near-infrared OLED, which would find immediate practical use.”

Overall efficiency is still relatively low in this early work because near-infrared emission from organic emitters is traditionally inefficient, and energy efficiency will, of course, always be limited to a maximum 100%. Nonetheless, this new method offers a way to increase efficiency and intensity without changing the emitter molecule, and the researchers are also looking into improving the emitter molecules themselves.

With further improvements, the researchers hope to get the exciton production efficiency up to 125%, which would be the next limit since electrical operation naturally leads to 25% singlets and 75% triplets. After that, they are considering ideas to convert triplets into singlets and possibly reach a quantum efficiency of 200%.
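The arithmetic behind that 125% ceiling is simple bookkeeping, using only the 25%/75% singlet-triplet split stated above:

```python
# Electrical injection yields 25% singlets and 75% triplets. If every singlet
# fissions into two triplets, the maximum triplet (exciton) yield per charge
# pair is:
singlets, triplets = 0.25, 0.75
max_triplet_yield = triplets + 2 * singlets
print(max_triplet_yield)   # 1.25, i.e. the 125% exciton production limit
```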

There are limits to how accurately you can measure things. Think of an X-ray image: it is likely quite blurry and something only an expert physician can interpret properly. The contrast between different tissues is rather poor but could be improved by longer exposure times, higher intensity, or by taking several images and overlapping them. But there are considerable limitations: humans can safely be exposed to only so much radiation, and imaging takes time and resources.

A well-established rule of thumb is the so-called standard quantum limit: the uncertainty of a measurement scales inversely with the square root of the available resources. In other words, the more resources – time, radiation power, number of images, etc. – you throw in, the more accurate your measurement will be. This will, however, only get you so far: extreme precision also means using excessive resources.
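This 1/√N scaling is easy to reproduce numerically. The Monte Carlo sketch below estimates a fixed value from N noisy samples and shows the root-mean-square error shrinking by about 10x when N grows by 100x; it is a generic illustration of the statistical limit, not a simulation of the experiment:

```python
# Illustration of standard-quantum-limit-style scaling: the RMS error of a
# mean estimate falls as 1/sqrt(N) in the number of repetitions N.
import math
import random

random.seed(1)

def rms_error(n_samples, trials=2000, true_value=0.0, noise=1.0):
    """RMS error of averaging n_samples noisy readings, over many trials."""
    sq_errs = []
    for _ in range(trials):
        est = sum(random.gauss(true_value, noise) for _ in range(n_samples)) / n_samples
        sq_errs.append((est - true_value) ** 2)
    return math.sqrt(sum(sq_errs) / trials)

e10, e1000 = rms_error(10), rms_error(1000)
print(e10 / e1000)   # ≈ 10, i.e. sqrt(1000 / 10)
```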

A team of researchers from Aalto University, ETH Zurich, and MIPT and the Landau Institute in Moscow has pushed the envelope and come up with a way to measure magnetic fields using a quantum system – with accuracy beyond the standard quantum limit.

The detection of magnetic fields is important in a variety of fields, from geological prospecting to imaging brain activity. The researchers believe that their work is a first step toward using quantum-enhanced methods in sensor technology.

‘We wanted to design a highly efficient but minimally invasive measurement technique. Imagine, for example, extremely sensitive samples: we have to either use as low intensities as possible to observe the samples or push the measurement time to a minimum,’ explains Sorin Paraoanu, leader of the Kvantti research group at Aalto University.

Their paper, published in the prestigious journal npj Quantum Information shows how to improve the accuracy of magnetic field measurements by exploiting the coherence of a superconducting artificial atom, a qubit. It is a tiny device made of overlapping strips of aluminium evaporated on a silicon chip – a technology similar to the one used to fabricate the processors of mobile phones and computers.

When the device is cooled to a very low temperature, magic happens: the electrical current flows in it without any resistance and starts to display quantum mechanical properties similar to those of real atoms. When irradiated with a microwave pulse – not unlike the ones in household microwave ovens – the state of the artificial atom changes. It turns out that this change depends on the external magnetic field applied: measure the atom and you will figure out the magnetic field.

But to surpass the standard quantum limit, yet another trick had to be performed using a technique similar to a widely-applied branch of machine learning, pattern recognition.

‘We use an adaptive technique: first, we perform a measurement, and then, depending on the result, we let our pattern recognition algorithm decide how to change a control parameter in the next step in order to achieve the fastest estimation of the magnetic field,’ explains Andrey Lebedev, corresponding author from ETH Zurich, now at MIPT in Moscow.

‘This is a nice example of quantum technology at work: by combining a quantum phenomenon with a measurement technique based on supervised machine learning, we can enhance the sensitivity of magnetic field detectors to a realm that clearly breaks the standard quantum limit,’ Lebedev says.

A Tokyo Institute of Technology research team has shown that copper nitride acts as an n-type semiconductor, and that p-type conduction can be achieved by fluorine doping. The work combined a unique nitriding technique applicable to mass production, a computational search for appropriate doping elements, atomically resolved microscopy, and electronic structure analysis using synchrotron radiation. These n-type and p-type copper nitride semiconductors could potentially replace the conventional toxic or rare materials in photovoltaic cells.

Thin film photovoltaics offer efficiency equivalent to market-dominating silicon solar panels while cutting material costs. Utilizing the photovoltaic effect, thin layers of specific p-type and n-type materials are sandwiched together to produce electricity from sunlight. The technology promises a brighter future for solar energy, allowing low-cost and scalable manufacturing routes compared to crystalline silicon technology, even though commercialized thin film solar cells still use toxic and rare materials. The Tokyo Institute of Technology team therefore set out to find a new candidate material for producing cleaner, cheaper thin film photovoltaics.

(a) Copper and copper nitride: an image of thin film copper plates before and after reacting with ammonia and oxygen; the copper metal has been transformed into copper nitride. (b) Theoretical calculations for p-type and n-type copper nitride: copper insertion yields an n-type semiconductor, fluorine insertion a p-type semiconductor. (c) Direct observation of the fluorine position in fluorine-doped copper nitride, with nitrogen plotted in red, fluorine in green and copper in blue; fluorine sits in the open space of the crystal, as predicted by the theoretical calculation. Credit: Advanced Materials

They focused on a simple binary compound, copper nitride, which is composed of environmentally friendly elements. However, growing a nitride crystal in high-quality form is challenging, as the history of developing gallium nitride blue LEDs tells us. Matsuzaki and his coworkers overcame the difficulty by introducing a novel catalytic reaction route using ammonia and an oxidant gas. This compound, pictured in the photograph in figure (a), is an n-type conductor with excess electrons. By inserting fluorine into the open space of the crystal, they found that the n-type compound transformed into a p-type one, as predicted by theoretical calculations and directly proven by atomically resolved microscopy, shown in figures (b) and (c), respectively.

All existing thin film photovoltaics require a p-type or n-type partner in their sandwich structure, and finding the best combination takes huge effort. Achieving both p-type and n-type conduction in the same material, as Matsuzaki and his coworkers have done, makes it possible to design a highly efficient solar cell structure without such efforts. The material is non-toxic, abundant and therefore potentially cheap: an ideal replacement for the cadmium telluride and copper indium gallium diselenide used in current thin film solar cells. With these p-type and n-type semiconductors, made by a scalable forming technique using simple, safe and abundant elements, thin film technology may be brought further into the light.

EV Group (EVG), a supplier of wafer bonding and lithography equipment for the MEMS, nanotechnology and semiconductor markets, today unveiled the new SmartView® NT3 aligner, which is available on the company’s industry benchmark GEMINI® FB XT integrated fusion bonding system for high-volume manufacturing (HVM) applications. Developed specifically for fusion and hybrid wafer bonding, the SmartView NT3 aligner provides sub-50-nm wafer-to-wafer alignment accuracy — a 2-3X improvement — as well as significantly higher throughput (up to 20 wafers per hour) compared to the previous-generation platform.

With the new SmartView NT3 aligner, the GEMINI FB XT provides integrated device manufacturers, foundries and outsourced semiconductor assembly and test providers (OSATs) with wafer bonding performance that is unmatched in the industry and can meet their future 3D-IC packaging requirements. Applications enabled by the enhanced GEMINI FB XT include memory stacking, 3D systems on chip (SoC), backside illuminated CMOS image sensor stacking, and die partitioning.

The new SmartView® NT3 aligner on EV Group’s GEMINI® FB XT fusion bonder enables a 2-3X improvement in wafer-to-wafer alignment accuracy over EVG’s previous-generation aligner.

Wafer Bonding an Enabling Process for 3D Device Stacking

Vertical stacking of semiconductor devices has become an increasingly viable approach to enabling continuous improvements in device density and performance. Wafer-to-wafer bonding is an essential process step to enable 3D stacked devices. However, tight alignment and overlay accuracy between the wafers is required to achieve good electrical contact between the interconnected devices on the bonded wafers, as well as to minimize the interconnect area at the bond interface so that more space can be made available on the wafer for producing devices. The constant reduction in pitches that are needed to support component roadmaps is fueling tighter wafer-to-wafer bonding specifications with each new product generation.

“At imec, we believe in the power of 3D technology to create new opportunities and possibilities for the semiconductor industry, and we are devoting a great deal of energy into improving it,” stated Eric Beyne, imec fellow and program director 3D system integration. “One area of particular focus is wafer-to-wafer bonding, where we are achieving excellent results in part through our work with industry partners such as EV Group. Last year, we succeeded in reducing the distance between the chip connections, or pitch, in hybrid wafer-to-wafer bonding to 1.4 microns, which is four times smaller than the current standard pitch in the industry. This year we are working to reduce the pitch by at least half again.”

“EVG’s GEMINI FB XT fusion bonding system has consistently led the industry in not only meeting but exceeding performance requirements for advanced packaging applications, with key overlay accuracy milestones achieved with several industry partners within the last year alone,” stated Paul Lindner, executive technology director, EV Group. “With the new SmartView NT3 aligner specifically engineered for the direct bonding market and added to our widely adopted GEMINI FB XT fusion bonder, EVG once again redefines what is possible in wafer bonding — helping the industry to continue to push the envelope in enabling stacked devices with increasing density and performance, lower power consumption and smaller footprint.”

The GEMINI FB XT fusion bonder with new SmartView NT3 aligner is available for customer demonstrations and testing. More information on the product can be found on EVG’s website at https://www.evgroup.com/en/products/bonding/integrated_bonding/geminifb/.

EVG will showcase the GEMINI FB XT with new SmartView NT3 aligner, along with its complete suite of wafer bonding, lithography and resist processing solutions for advanced packaging applications, at SEMICON West, to be held July 10-12 at the Moscone Convention Center in San Francisco, Calif. Attendees interested in learning more can visit EVG at Booth #623 in the South Hall.

In addition, Dr. Thomas Uhrmann, director of business development at EV Group, will highlight the GEMINI FB XT and other developments in wafer bonding in his presentation “Collective Bonding for Heterogeneous Integration in Advanced Packaging” at the Meet the Experts Theater Smart Manufacturing Pavilion at SEMICON West on Thursday, July 12 from 3:00-3:30 p.m. in the South Hall.