
At this week’s IEEE IEDM conference, nano-electronics research center imec showed for the first time the integration of high-mobility InGaAs as a channel material for 3D vertical NAND memory devices, formed in plugs (holes) with diameters down to 45nm. The new channel material improves transconductance (gm) and read current, which is crucial for enabling further VNAND cost reduction by adding layers to the 3D vertical architecture.

Non-volatile 3D NAND flash memory technology is used to overcome the scaling issues of conventional planar NAND flash, which suffers from severe cell-to-cell interference and read noise due to aggressively scaled dimensions. However, current 3D NAND devices, featuring a poly-Si channel, exhibit a drive current that decreases linearly with the number of memory layers, which is not sustainable for long-term scaling. This is because conduction in the poly-silicon channel is governed by the grain-size distribution and hampered by scattering at grain boundaries and charged defects.

To boost the drive current in the channel, imec replaced the poly-Si channel material with InGaAs through a gate-first, channel-last approach. The channel was formed by metal organic vapor phase epitaxy (MOVPE), showing good III-V growth selectivity to silicon and hole-filling capability down to 45nm. The resulting III-V devices outperformed the poly-Si devices in terms of on-state current (ION) and transconductance (gm), without degrading memory characteristics such as program, erase and endurance.

“We are extremely pleased with these results, as they provide critical knowledge of Flash memory operation with a III-V channel as well as of the III-V interface with the memory stack,” stated An Steegen, Senior Vice President of Process Technology at imec. “While these results are shown on full channels, they are an important stepping stone toward developing industry-compatible macaroni-type III-V channels.”

Imec’s research into advanced memory is performed in cooperation with imec’s key partners in its core CMOS programs, including Samsung, Micron-Intel, Toshiba-Sandisk, SK Hynix, TSMC and GlobalFoundries.

Typical ID-VG characteristics. In0.6Ga0.4As presents an improved ID-VG characteristic; an Ion/Ioff ratio of three orders of magnitude is sufficient for typical NAND operation.


Due to the further scaling and increasing complexity of transistors, the boundaries between back-end-of-line and front-end-of-line reliability research are gradually fading. Imec’s team leaders Kristof Croes and Dimitri Linten give their vision on the future of reliability research.

In April 2015, the 53rd edition of the IEEE International Reliability Physics Symposium (IRPS) took place, a top conference where experts in reliability of micro- and nanoelectronics meet. With 16 contributions as either an author or a co-author, imec was prominently present.

Dimitri Linten: “Our contributions to conferences such as IRPS highlight the unique role that imec plays in the field of reliability. And they show the importance of reliability research at imec for the development of new transistor and memory concepts. As scaling continues, a whole range of new technology options is being researched. New materials and architectures with often unknown failure mechanisms are being introduced. Reliability is one of the factors that determine which concept will finally have a chance. For example, one of the options is to replace silicon in the transistor’s channel with germanium or a III-V material, since these materials provide a higher charge-carrier mobility. So far, however, these materials pose important challenges for the reliability of the transistors made from them. Researchers are also looking at introducing either air gaps or ultralow-k materials as spacers between the transistor’s gate and drain in order to keep the capacitance as low as possible. The integration of all these new materials is important, but their reliability is crucial as well: reliability before performance.”

Front of line Fig 1

Kristof Croes: “10 years ago, reliability was tested only in the final stage of technology development. But due to ever-decreasing reliability margins, reliability is now tested from the very beginning. And this starts with understanding the physics behind the failure, for which we often collaborate with universities. Once we understand the failure of, for example, new materials, we can model our findings and predict the lifetime of the device.”

Front-end-of-line vs back-end-of-line

Traditionally, CMOS process engineers divide the semiconductor process into two main parts: the front-end-of-line (FEOL) and the back-end-of-line (BEOL). The FEOL comprises all the process steps that are related to the transistor itself, including the gate of the transistor. The BEOL comprises all subsequent process steps, in which the various transistors are interconnected through metal lines. The same classification is used in reliability research; consequently, FEOL and BEOL reliability are tested independently.

Kristof Croes: “This historical separation is applied at imec as well, where reliability research within the process technology division is distributed among several groups. One group looks into the reliability of FEOL and memory chips. Another group investigates BEOL reliability and chip-package interaction. Today, in BEOL processes, electromigration (the movement of metal atoms as a result of an electric current), stress migration, time-dependent dielectric breakdown (TDDB) and thermomechanical stress are the main failure mechanisms. We also look into 3D structures, where the impact and reliability of through-Si vias are important issues. In a 3D-stacked structure, for some applications, the Si wafer needs to be thinned down to about 5 micrometer, and this impacts the reliability. There are also thermal and thermo-mechanical influences related to the assembly of materials with completely different mechanical properties. All these failure mechanisms in the BEOL will become increasingly important for future technology nodes.”

Dimitri Linten: “We look into the time-dependent dielectric breakdown (TDDB) of the gate stack, and into stress-induced leakage current (SILC) and hot-carrier stress (HC). The bias temperature instability (BTI) is important as well, as it causes a shift of the threshold voltage (VT) of the transistor during the lifetime of the circuit. We also investigate memory elements, by testing and modelling the retention and endurance of the memories. ESD, or electrostatic discharge, is still one of the most important failure mechanisms at the level of the final ICs in a given technology. In order to intercept the current that is released during an electrostatic discharge, protective ESD structures are implemented in the FEOL.”

Front of line Fig 2

FEOL and BEOL reliability: fading boundaries

As the dimensions of the transistor shrink, the impact of the FEOL on the BEOL reliability – and vice versa – increases. Kristof Croes: “A well-known example is self-heating in FinFETs. In planar CMOS processes, the heat that is released during the transistor’s operation is dissipated mainly through the Si substrate. But in a FinFET architecture, we have to take into account a higher thermal coupling towards the metal interconnects. The FinFETs warm up and heat the metal lines, and this impacts the reliability of the BEOL structure. In 3D technology, we thin down wafers with TSVs. After opening the TSVs, we can stack them on top of another wafer. The integration of the TSVs and the thinning and stacking of the wafers influence both the FEOL and the BEOL performance and reliability.”

Dimitri Linten: “The introduction of new architectures also brings the reliability of FEOL and BEOL closer to each other. Think about vertical nanowires, potential successors of the FinFET because they promise better electrostatic channel control. One of the challenges in terms of reliability is to provide these structures with ESD protection. While in more conventional structures the FEOL is most sensitive to electrostatic discharge, the impact of electrostatic discharge on the BEOL becomes critical in vertical nanowires. In these 3D structures, we have to connect all the vertical nanowires through local interconnects and interconnects that will be located very close to each other. And these interconnects will impose different requirements on the ESD protection circuit than we are used to. A possible solution is to consider 3D stacking of ESD protection circuits on top of the transistor architecture.”

Another consequence of further scaling is an increase in the variability of the transistor parameters. In FEOL, variability is a well-known phenomenon.

Dimitri Linten: “Time-dependent variability of BTI is a relatively new challenge for reliability research. For large transistors – the older generations – BTI translates into an average shift of the circuit’s threshold voltage of e.g. 50mV, which is the BTI spec. But upon further scaling, there is no longer a single average shift. Instead, there is a statistical distribution of shifts: the variability becomes time dependent and the lifetime of the circuits is spread. The imec FEOL reliability group is a world leader in this domain: we have developed a defect-centric BTI model that has been adopted by market leaders in the semiconductor industry. On time-dependent BTI, we closely collaborate with the design group in order to develop methodologies that take the time dependence into account.”
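To make this defect-centric picture concrete, here is a minimal Monte-Carlo sketch in Python: each device gets a Poisson-distributed number of active defects, and each defect contributes an exponentially distributed threshold-voltage shift, which is the basic structure of published defect-centric BTI models. All parameter values are invented for illustration and are not imec’s calibrated numbers.

import numpy as np

# Defect-centric BTI sketch: Poisson number of defects per device,
# exponentially distributed Vth shift per defect (illustrative values only).
rng = np.random.default_rng(0)
n_devices = 100_000
mean_defects = 4.0   # average number of active defects per device (assumed)
eta_mv = 5.0         # mean Vth shift per defect, in mV (assumed)

defect_counts = rng.poisson(mean_defects, n_devices)
dvth_mv = np.array([rng.exponential(eta_mv, n).sum() for n in defect_counts])

print(f"mean dVth         = {dvth_mv.mean():.1f} mV")   # ~ mean_defects * eta_mv
print(f"sigma dVth        = {dvth_mv.std():.1f} mV")
print(f"99.9th percentile = {np.percentile(dvth_mv, 99.9):.1f} mV")

The long tail of this distribution, rather than its mean, is what sets the lifetime spread of deeply scaled devices.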

Kristof Croes: “Also in BEOL, variability becomes increasingly important. Think about via misalignment or line edge roughness of increasingly smaller metal lines. These issues degrade the reliability and the lifetime of the BEOL. To deal with the increasing variability, a powerful statistical toolbox is required. And this toolbox can be deployed for BEOL as well as for FEOL reliability research.”

Front of line Fig 3

When BEOL meets FEOL reliability

As dimensions are shrinking, the boundaries between FEOL and BEOL reliability are gradually fading.

Kristof Croes: “We are convinced that we should optimally attune the activities and tools used for reliability research. We have to bring the people from BEOL and FEOL reliability closer together. And we want to unite the researchers outside these groups that work on reliability. Reliability is a field of expertise and sharing problems often provides part of the solution. For future technology nodes and for developments beyond scaling, this will increase the operational efficiency of reliability research. To strengthen this idea, we will organize an internal workshop at imec on September 4, with the help of our predecessors and colleagues Guido Groeseneken and Ingrid De Wolf. This will help our researchers to gain more insight into each other’s work and into the tools they use. Hopefully, this idea will be adopted outside of imec as well.”

Additional reading

Technical program of the 2015 IRPS conference with abstracts (http://www.irps.org/program/technical-program/15-program.pdf)

“New test allows to visualize in real-time crack formation of BEOL,” March issue of imec magazine (http://magazine.imec.be/data/57/reader/reader.html#preferred/1/package/57/pub/63/page/2)

Feed-forward can be applied for controlling overlay error by using Coherent Gradient Sensing (CGS) data to reveal correlations between displacement variation and overlay variation.

BY DOUG ANBERG and DAVID M. OWEN, Ultratech, San Jose, CA

As the semiconductor industry fast approaches 10nm design rules, it faces many difficulties in process integration and device yield. Lithography process control is expected to be a major challenge, requiring overlay control to a few nanometers. There are many factors that impact the overlay budget, which can be broadly categorized as those arising from the reticle, the lithography tool and wafer processing. Typically, overlay budget components associated with the reticle and lithography tool can be characterized and are relatively stable. However, as published elsewhere, process-based sources of surface displacement (e.g., etch, anneal, CMP) can contribute to the lithography overlay budget, independent of the lithography process. Wafer-shape measurement can be implemented to characterize process-induced displacements. The displacement information can then be used to monitor specific processes for excursions or be modeled in terms of parameters that can be fed forward to correct the lithography process for each wafer or lot.

The implementation of displacement feed-forward for overlay control requires several components, including: a) a system capable of making comprehensive surface displacement measurements at high throughput, b) a characterization and understanding of the relationship between displacement and overlay and the corresponding displacement variability, and c) a method or system to integrate the displacement information with the lithography control system. The Coherent Gradient Sensing (CGS) technique facilitates the generation of high-density displacement maps (>3 million points on 300mm wafers) such that distortions and stresses induced shot-by-shot and process-by-process can be tracked in detail. This article demonstrates how feed-forward can be applied to control overlay error by using CGS data to reveal correlations between displacement variation and overlay variation.

High-speed, full-wafer data collection

Historically, patterned wafer surface inspection was limited to monitoring topography variations within the die area and across the wafer with the use of point-by-point measurements with low throughput, typically limiting measurements to off-line process development. Surface inspection of patterned wafers involving transparent films (e.g. SiO2 deposited films) was typically further limited to contact techniques such as stylus profilometry.

With CGS interferometry, a high-resolution front-surface topography map of a full 300 mm patterned wafer can be obtained for product wafers with an inspection time of a few seconds. Transparent films can typically be measured successfully without opaque capping layers due to the self-referencing attribute of the CGS interferometer. Essentially, CGS technology compares the relative heights of two points on the wafer surface that are separated by a fixed distance. Physically, the change in height over a fixed distance provides slope or tilt information and the fringes in a CGS interference pattern are contours of constant slope. In order to reconstruct the shape of the surface under investigation, interference data in two orthogonal directions must be collected. The slope data derived from the interference patterns is integrated numerically to generate the surface shape or topography. In-plane surface displacements in the x- and y-directions can then be computed from the surface topography using fundamentals of plate theory (FIGURE 1).


FIGURE 1. Example of the analysis of the uniform and non-uniform stress components of the displacement field: (a) total displacement computed from the x-direction slope, (b) uniform stress component of the displacement field determined from the best-fit plane to the data in (a), (c) non-uniform stress component of the displacement field.
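As an illustration of the slope-integration and plate-theory steps described above, the short Python sketch below reconstructs a synthetic wafer topography from two orthogonal slope maps and then estimates the in-plane surface displacement with a simple thin-plate relation, u_x ≈ -(t/2)·∂w/∂x. The grid, wafer thickness and synthetic bow are assumptions for illustration only and do not represent the commercial CGS algorithm.

import numpy as np

# Synthetic 300mm wafer on a uniform grid (heights in um, lateral units in um)
nx = 256
pitch_um = 300_000.0 / nx
coords = (np.arange(nx) - nx / 2) * pitch_um
X, Y = np.meshgrid(coords, coords)
w_true = 20.0 * (X**2 + Y**2) / 150_000.0**2      # bowed wafer, ~40 um at the edge

# What a CGS-style measurement effectively delivers: slope maps in two directions
slope_y, slope_x = np.gradient(w_true, pitch_um)

# Numerically integrate the slopes (trapezoidal path integration) to recover topography
first_col = np.concatenate(([0.0], np.cumsum(0.5 * (slope_y[1:, 0] + slope_y[:-1, 0]) * pitch_um)))
w_rec = first_col[:, None] + np.concatenate(
    (np.zeros((nx, 1)),
     np.cumsum(0.5 * (slope_x[:, 1:] + slope_x[:, :-1]) * pitch_um, axis=1)), axis=1)
err = (w_rec - w_rec.mean()) - (w_true - w_true.mean())
print("max topography reconstruction error (um):", float(np.abs(err).max()))

# Thin-plate estimate of in-plane surface displacement for a 775 um thick wafer
t_um = 775.0
u_x_nm = -(t_um / 2.0) * slope_x * 1e3
print("max |u_x| (nm):", float(np.abs(u_x_nm).max()))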

To best utilize the capabilities of CGS technology for determining stress-induced displacement impacting critical layer overlay budgets, a “Post minus Pre” inspection strategy is typically employed, where two measurements of a wafer are taken: one prior to the process step or module of interest (the pre-process map), and a second measurement is taken on the same wafer after completing the process step or module (the post-process map). The pre-process topography map is then mathematically subtracted from the post-process topography map, providing detailed, high resolution information about the topography variation in the process step or module of interest. A series of topography maps illustrating the “Post minus Pre” process is shown in FIGURE 2.

FIGURE 2. Example of “Post minus Pre” topography CGS measurement.

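The arithmetic behind the “Post minus Pre” strategy is a simple map subtraction; the Python lines below use synthetic stand-ins for the two measured topography maps of the same wafer.

import numpy as np

# Synthetic pre- and post-process topography maps (um); real maps come from CGS
yy, xx = np.mgrid[-1:1:512j, -1:1:512j]
pre_map = 5.0 * (xx**2 + yy**2)              # incoming wafer bow
post_map = pre_map + 1.2 * xx**2 - 0.4 * yy  # process step adds extra bow and tilt

delta = post_map - pre_map                   # process-induced topography change only
print("peak-to-valley change (um):", round(float(delta.max() - delta.min()), 3))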

The surface displacements directly impact the relative position of all points on the wafer surface, leading to potential alignment errors across the wafer at the lithography step. By measuring the evolution of process-induced stresses and displacement across multiple steps in a process flow, the overlay error due to the accumulated stress changes from those process steps can be evaluated, and the cumulative displacement can be calculated. The displacement error can then be fed forward to the lithography tool for improved overlay correction during the exposure process.

In the simplest implementation of this approach, the pre-process or reference measurement would be made following the prior lithography step, whereas the post-process measurement would be made just before the lithography step of interest. In this manner, the total displacement induced between two lithography steps can be characterized and provided to the lithography system for overlay correction.

Stress and displacement process fingerprinting

By using CGS-based inspection to generate full-wafer topography, displacement and stress, detailed information can be provided both for off-line process monitoring (SPC) and for in-line, real-time monitoring (APC) of process steps with significant process-induced stress and displacement. A key consequence of the monitoring flexibility afforded by the measurement is the ability to characterize and compare within-wafer displacement and stress fingerprints of individual process chambers in a manufacturing line.

Target-based overlay metrology systems have historically been the only metrology tools used to measure overlay error at critical lithography layers. Overlay data from the target-based overlay tools is collected after the wafer exposure step and is fed backward to correct for the measured overlay error on subsequent wafers. As process-induced displacement errors become a significant percentage of the layer-to-layer overlay budget, this post-processing feedback approach to overlay correction may not be sufficient to meet critical layer overlay specifications. Furthermore, overlay errors are often larger near the edge of the wafer, where traditional overlay metrology target densities are typically low, providing only limited data for overlay correction.

The implementation of displacement feed-forward overlay correction can be used to account for wafer-to-wafer and within-wafer distortions prior to lithography. The displacements can be characterized using an appropriate model, and the model coefficients, or correctables, can be provided to the lithography tool for adjustment and control on a wafer-by-wafer basis. As shown in FIGURE 3, the CGS technique has the additional advantage of providing high data density near the edge of the wafer (typically >75,000 data points beyond 145 mm, sub-sampled in the Fig. 3 vector map for clarity), such that more accurate corrections can be determined where the overlay errors tend to be largest. As a result, lithography rework can be reduced and productivity increased. Case studies have revealed that a significant improvement in overlay can be achieved using this approach.

FIGURE 3. Vector displacement map showing process-induced edge distortion.

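A minimal, generic example of turning a measured displacement field into linear correctables is sketched below: a six-parameter linear model (translations plus the in-plane displacement gradients, from which magnification and rotation terms follow) is fit by least squares. This is an illustrative parameterization, not necessarily the model any particular scanner or the authors’ system uses.

import numpy as np

def fit_correctables(x_mm, y_mm, ux_nm, uy_nm):
    # Fit ux ~ Tx + a*x + b*y and uy ~ Ty + c*x + d*y by least squares.
    # Magnification ~ dux_dx, duy_dy; rotation ~ 0.5*(duy_dx - dux_dy).
    A = np.column_stack([np.ones_like(x_mm), x_mm, y_mm])
    cx, *_ = np.linalg.lstsq(A, ux_nm, rcond=None)
    cy, *_ = np.linalg.lstsq(A, uy_nm, rcond=None)
    return {"Tx_nm": cx[0], "dux_dx": cx[1], "dux_dy": cx[2],
            "Ty_nm": cy[0], "duy_dx": cy[1], "duy_dy": cy[2]}

# Synthetic check: 0.1 urad wafer rotation plus a 5 nm x-translation
rng = np.random.default_rng(1)
x = rng.uniform(-145, 145, 500)                 # site coordinates in mm
y = rng.uniform(-145, 145, 500)
theta = 0.1e-6                                  # rad
ux = 5.0 - theta * y * 1e6                      # mm -> nm
uy = theta * x * 1e6
print(fit_correctables(x, y, ux, uy))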

For each critical lithography step, a correlation is typically generated by comparing the traditional overlay measurement tool results to the surface displacement measured by the CGS measurement tool. Recognizing that displacement is only one component of the total overlay measurement, correlation of overlay to displacement requires effort to model or characterize the non-displacement components of the measured overlay. As a result, the appropriate correlation is derived by comparing total overlay to displacement plus the non-displacement overlay sources.

FIGURE 4 shows plots of total overlay versus displacement plus modeled non-displacement overlay sources for multiple locations on a single wafer processed in a leading-edge device flow. Figure 4a shows the x-direction data, whereas Fig. 4b shows the y-direction data. The data is presented in arbitrary units; however, the same reference value in nanometers was used to normalize each set of data. The displacement data was evaluated at the same locations as the overlay target positions. The point-to-point data show good correlation for both directions, with correlation coefficients of 0.70 and 0.76 for the x- and y-directions, respectively. The RMS of the residuals of the linear fit to each data set is on the order of 1.5 to 2.0 nm.


FIGURE 4. Within-wafer (point-to-point) correlation of conventional overlay data and displacement data for the (a) x-direction and (b) y-direction.
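The statistics quoted above (a correlation coefficient and the RMS of the residuals of a linear fit) can be computed for any such data set with a few lines of Python; the arrays below are synthetic placeholders, not the measured data behind Fig. 4.

import numpy as np

def correlate(overlay_nm, predicted_nm):
    # Pearson correlation plus RMS of residuals about the best-fit line
    r = np.corrcoef(overlay_nm, predicted_nm)[0, 1]
    slope, intercept = np.polyfit(predicted_nm, overlay_nm, 1)
    residuals = overlay_nm - (slope * predicted_nm + intercept)
    return r, float(np.sqrt(np.mean(residuals**2)))

# Synthetic demo: predicted overlay plus ~1.5 nm of unmodeled lithography noise
rng = np.random.default_rng(2)
predicted = rng.normal(0.0, 4.0, 300)
measured = predicted + rng.normal(0.0, 1.5, 300)
r, rms = correlate(measured, predicted)
print(f"R = {r:.2f}, residual RMS = {rms:.2f} nm")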

FIGURE 5 similarly shows the wafer-to-wafer variation for overlay and displacement for the x-direction (Fig. 5a) and y-direction (Fig. 5b). The data in Fig. 5 are from multiple lots for the same lithography process evaluated to generate the data in Fig. 4. As with the point-to-point data, the wafer-to-wafer data shows strong correlation with correlation coefficients of 0.94 and 0.90 for the x-direction and y-direction, respectively.


FIGURE 5. Wafer-level correlation between conventional overlay, |mean| + 3 sigma, and displacement, |mean| + 3 sigma, for a leading-edge process in the (a) x-direction and (b) y-direction.

The data in Figs. 4 and 5 illustrate key points regarding the correlation of overlay to displacement. First, the inherent variability of an advanced lithography process is typically on the order of 1 to 2nm. As a result, it is reasonable to conclude that most of the scatter shown in Fig. 4 is likely associated with the variability in non-displacement sources of overlay variation. Second, the modeling or empirical characterization of non-displacement overlay sources is useful to the extent to which those non-displacement sources are constant. Consequently, if such modeling is part of the displacement feed-forward scheme in an effort to predict overlay, the model must account for known variations in the lithography process. A simple example is variations in overlay performance due to differences between lithography chucks.

Displacement feed forward

It has been shown elsewhere that stress-induced displacement can account for a significant fraction of the overlay error for certain critical layers at the 40nm node and below. It is therefore critical to develop the tools necessary for utilizing the measured displacement data for real-time, in-line feed-forward overlay correction to the scanner. One approach is to develop a system that allows the user to define the level of correction to be applied to the scanner for each lot, wafer or within-wafer zone.

FIGURE 6 shows a simplified schematic for a combined displacement feed-forward and image placement error feed-back approach. Once the process induced displacement for a specific set of process steps has been measured and correlated to overlay error, the measured displacement can be “fed forward” to the scanner in combination with traditional image placement error feedback techniques to further improve critical layer scanner overlay results. This approach is currently being implemented in leading-edge memory fabs to further reduce overlay errors on critical lithography levels and improve overall device yield.

Summary

The measurement of process-induced surface displacement can be an effective part of the overlay control strategy for critical layers at leading-edge process nodes. CGS technology provides a method to comprehensively measure these displacements at any point in the process flow. Using a full-wafer interferometer, the system measures the patterned wafer surface in a few seconds and provides a map with up to 3,000,000 data points. This enables 100% in-line monitoring of individual wafers for in-situ stress and process-induced surface displacement measurements. Its self-referencing interferometer allows the inspection to be made on any type of surface or film stack, and does not require a measurement target. This capability is currently being employed in numerous leading-edge memory and logic processes.

DOUG ANBERG currently serves as Ultratech’s Vice President of Advanced Lithography Applications; DAVID M. OWEN has been the Chief Technologist for Surface Inspection at Ultratech since 2006. Prior to joining Ultratech, Dr. Owen spent nearly a decade as a research scientist at the California Institute of Technology (Caltech) in Pasadena, and was the Founder and Chief Technology Officer for Oraxion Diagnostics.

At this week’s IEEE International Electron Devices Meeting 2015, nano-electronics research center imec presented breakthrough results to increase performance and improve reliability of deeply scaled silicon CMOS logic devices.

Continued transistor scaling has resulted in increased transistor performance and transistor densities for the last 50 years. With transistor scaling reaching the critical limits of atomic dimensions, imec’s R&D program on advanced logic scaling targets the new and mounting performance, power, cost, and density challenges of future process technologies. Imec is looking into extending silicon CMOS technology by tackling the detrimental impact of parasitics on device performance and reliability, and by introducing novel architectures such as gate-all-around nanowires, which are expected to improve short-channel control.

One of the achievements is a record low contact resistivity of 1.5×10⁻⁹ Ωcm² for n-Si, realized by combining dynamic surface anneal (DSA) to enhance P activation in highly doped Si:P with Ge pre-amorphisation and Ti silicidation. Imec also presented decreased access resistance in NMOS Si bulk finFETs by applying extension doping with phosphorus-doped silicate glass (PSG) to achieve damage-free and uniform sidewall doping of the fin. Finally, imec introduced junction-less high-k metal-gate-all-around nanowires to improve on- and off-state hot-carrier reliability.

“I am extremely proud of the record number of 23 papers that we are presenting at this year’s IEDM 2015,” stated Luc Van den hove, President and CEO at imec. “Our presence rewards and confirms our leading position in advanced semiconductor R&D. As many as 10 of the presented papers concern different aspects of our advanced logic program. Next to our research efforts to extend silicon CMOS technology into the 7nm technology node and beyond, we are looking beyond silicon CMOS, integrating high-mobility materials to increase the channel mobility, and exploring new concepts such as spintronics and 2D materials.”

Imec’s research into advanced logic scaling is performed in cooperation with imec’s key partners in its core CMOS programs including GlobalFoundries, Intel, Micron, Panasonic, Qualcomm, Samsung, SK Hynix, Sony and TSMC.

Cross-section of JL nanowires with or without an acceptor type interface, cut along the middle of the gate. The electrostatic potential is asymmetric when a trap is introduced; the squeezed channel improves the electrostatics and the subthreshold slope.


Subtleties in thicknesses between the alternating Cu metal and dielectric layers within a build-up substrate can impact BLR performance.

BY JAIMAL WILLIAMSON, Texas Instruments, Dallas, TX

Managing an organization in an orderly and disciplined manner is known as “running a tight ship.” This mentality and discipline cannot be overstated with respect to build-up substrate supplier capability and manufacturing tolerances as they relate to reliability and margin in a flip chip ball grid array (FCBGA) device. Build-up substrate technology is the backbone of flip chip packaging due to its ability to bridge high-density interconnects and functionality, enabling improved electrical performance in tandem with the semiconductor chip. Alternating metal and dielectric layers build up the substrate into the final composite structure. The range of thicknesses of the aforementioned metal and dielectric layers depends on the associated substrate manufacturer’s design rules, which can have an impact on board level reliability (BLR). A keen awareness of substrate supplier design rules can aid not only troubleshooting, but also the understanding of reliability margin from a chip-package interaction standpoint for an array of commercial and automotive FCBGA applications.

Influence of copper and dielectric layers on reliability

To better understand how thickness variation of the bottommost substrate copper (Cu) metal (15 +/- 5μm) and dielectric (30 +/- 6μm) layers relates to the strain energy density of the BGA solder joints at the die shadow area and package corner, a 3×3 factorial design of experiments (DoE) approach (FIGURE 1) was pursued. Through the use of finite element modeling, outputs of the study included strain energy density under both -40°C to 125°C and 0°C to 100°C BLR temperature cycle conditions, as well as changes in coefficient of thermal expansion (CTE) as Cu metal and dielectric thicknesses varied. For the remainder of the article, results from the more stringent -40°C to 125°C BLR temperature cycle condition will be discussed.

FIGURE 1. 3x3 factorial DoE.

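For readers who want to reproduce the experimental grid, the 3×3 factorial simply crosses the Cu layer at its lower, nominal and upper specification limits with the dielectric layer at its lower, nominal and upper limits; the Python snippet below enumerates the nine legs. The leg numbering here is arbitrary and need not match the ordering in Fig. 1.

from itertools import product

cu_levels_um = [10, 15, 20]            # 15 +/- 5 um specification
dielectric_levels_um = [24, 30, 36]    # 30 +/- 6 um specification

for leg, (cu, diel) in enumerate(product(cu_levels_um, dielectric_levels_um), start=1):
    print(f"leg {leg}: Cu = {cu} um, dielectric = {diel} um")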

The rationale for the study was a striking difference in BLR performance between two FCBGA daisy chain test vehicles having an identical substrate design but manufactured at two different substrate suppliers (noted as supplier A and B in this article). The FCBGA daisy chain test vehicle comprises the following package attributes (see FIGURE 2 for a side view example):
• 40mm x 40mm body size
• 8-layer build-up stack (3/2/3)
• 400μm core thickness
• 1mm BGA pitch

FIGURE 2. Example of FCBGA package.


Weibull analysis was generated from empirical BLR results at 5 percent and 63.2 percent cycles to failure. Specifically, at 5 percent cycles to failure, supplier A exhibits ~25 percent lower BGA solder joint fatigue life than its counterpart from supplier B (as illustrated in FIGURES 3 and 4).

FIGURE 3. Weibull plot of supplier A.


FIGURE 4. Weibull plot of supplier B.

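For reference, cycles to failure at a given cumulative failure fraction follow directly from the two-parameter Weibull distribution, N(F) = η·(-ln(1-F))^(1/β). The Python sketch below evaluates the 5 percent and 63.2 percent points for two hypothetical suppliers; the shape and scale values are invented for illustration, and the actual fits are those shown in Figs. 3 and 4.

import numpy as np

def cycles_at(failure_fraction, eta, beta):
    # Invert the Weibull CDF: N(F) = eta * (-ln(1 - F))**(1/beta)
    return eta * (-np.log(1.0 - failure_fraction)) ** (1.0 / beta)

beta = 8.0                                                 # hypothetical shape factor
suppliers = {"supplier A": 3000.0, "supplier B": 3400.0}   # hypothetical eta (cycles)

for name, eta in suppliers.items():
    print(f"{name}: N5% = {cycles_at(0.05, eta, beta):.0f} cycles, "
          f"N63.2% = {cycles_at(0.632, eta, beta):.0f} cycles")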

In a similar study focusing on component level reliability (CLR), it was observed that bottommost substrate Cu layer thickness can impact stress underneath die shadow area. For these reasons, a more detailed examination was done to measure bottommost substrate Cu layer thickness from daisy chain units of suppliers A and B. Based on package construction analysis, supplier A was found to target the nominal value of 15μm; whereas supplier B targeted the high end of specification at 20μm. These Cu thickness differences would play a significant role in the BLR results.

Stress modeling results

Outputs of the finite element modeling are revealed in FIGURE 5, based on inputs from the aforementioned 3×3 factorial DoE illustrated in Fig. 1. For the combinations of Cu and dielectric layer thicknesses evaluated, thicker dielectric and Cu layers yield higher macroscopic CTE values. This is an expected trend based on the CTE material properties of the Cu and dielectric layers in relation to the substrate core material. Simulation results confirmed that CTE in descending order is: dielectric layer > Cu layer > substrate core. Comparing the Weibull analysis from suppliers A and B (Figures 3 and 4), DoE legs 4 and 6 match best, respectively, to the empirical BLR results. In addition, DoE legs 4 and 6 align with the bottommost substrate Cu layer thickness values from the aforementioned package construction analysis measurements. It is noted that, based on modeling results, an approximately 2 percent change in CTE can swing the cycles to failure at 63.2 percent by ~11 percent. DoE leg 4 uses the nominal Cu thickness of 15μm, whereas leg 6 uses the high end of the Cu thickness tolerance at 20μm; dielectric thickness is the nominal value of 30μm in both DoE cases. The improved BLR performance from supplier B is attributed to the thicker Cu providing a better CTE match to the BLR test board.

FIGURE 5. Finite elemental modeling results.


Use of JMP for statistical perspective

As a supplemental tool for data interpretation, JMP statistical analysis was performed to illustrate how nominal and extreme values of the metal and dielectric layer thickness specification affect FCBGA BLR performance. For the strain energy density outputs, the actual values fit the model predictions well, as shown in FIGURE 6. Similarly, CTE correlated well with the predicted values, as illustrated in FIGURE 7. The prediction profiler function, illustrated in FIGURE 8, shows that CTE increases with metal and dielectric thickness, which correlates with the stress modeling results.

FIGURE 6. JMP model of SED predicted vs. actual.


FIGURE 7. JMP model of CTE predicted vs. actual.


FIGURE 8. CTE prediction as a function of metal and dielectric thickness

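The linear response-surface fit that underlies a JMP-style prediction profiler can be sketched in a few lines of Python: regress the modeled CTE on Cu and dielectric thickness over the nine DoE legs. The CTE values below are invented placeholders; the real inputs are the finite element results summarized in Fig. 5.

import numpy as np

cu_um = np.repeat([10.0, 15.0, 20.0], 3)        # Cu thickness for the nine legs
diel_um = np.tile([24.0, 30.0, 36.0], 3)        # dielectric thickness for the nine legs
cte_ppm = 10.0 + 0.02 * cu_um + 0.03 * diel_um  # placeholder FEM outputs (ppm/degC)

A = np.column_stack([np.ones_like(cu_um), cu_um, diel_um])
coef, *_ = np.linalg.lstsq(A, cte_ppm, rcond=None)
print(f"CTE ~ {coef[0]:.2f} + {coef[1]:.3f}*Cu_um + {coef[2]:.3f}*diel_um (ppm/degC)")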

Summary

Subtleties in thicknesses between the alternating Cu metal and dielectric layers within a build-up substrate can impact BLR performance. Two identical daisy chain substrate designs manufactured by different suppliers were compared head to head. A detailed package construction analysis revealed differences in the bottommost Cu thickness layer within the substrate. This Cu thickness delta between the two substrate designs resulted in a higher CTE for supplier B than for supplier A, due to the thicker copper. Finite element modeling demonstrated that relatively small macroscopic changes in CTE, on the order of less than 2 percent, can affect cycles to failure by 11 percent.

The key takeaway from the head-to-head evaluation was that supplier A produced the more stable process, as it was able to meet the center point of the Cu thickness specification, whereas supplier B was off target. In essence, however, supplier A lost the head-to-head BLR comparison with supplier B, because its accuracy in meeting the Cu thickness target resulted in reduced solder joint fatigue life. The typical corrective action would be to work with supplier B to establish better tolerance control in its Cu plating process and stabilize Cu thickness at the center or nominal value, like supplier A. However, the lesson learned was to tailor and control the Cu thickness at the higher end of the specification to improve reliability performance. Typically, the criterion of success is to hit the bullseye or target, which supplier A achieved. Conversely, supplier B missed this mark with results that were skewed to the right. Ironically, because of the skewed, off-target results, additional reliability margin was obtained. In reflection of these findings, the adage “success is in the eyes of the beholder” has never been more poignant.

JAIMAL WILLIAMSON is a packaging engineer responsible for development and qualification of Embedding Processing FCBGA devices within Texas Instruments’ Worldwide Semiconductor Packaging group.

CEA-Leti today announced it has developed two techniques to induce local strain in FD-SOI processes for next-generation FD-SOI circuits, which will produce more speed at the same, or lower, power consumption, improving overall performance.

The local-strain solutions are dual-strained technologies: compressive SiGe for PFETs and tensile Si for NFETs. In addition to clearing the path to improved performance in FD-SOI technology, they preserve its excellent electrostatic integrity and its in situ performance tunability, due to back biasing.

The two techniques Leti developed can induce local stress as high as 1.6 GPa in the MOSFET channel.

The first relies on strain transfer from a relaxed SiGe layer on top of the SOI film. In a recent paper in the ECS Journal of Solid State Science and Technology, Leti researcher Sylvain Maitrejean described how, with this technique, he was able to boost the short-channel electron mobility by more than 20 percent compared to an unstrained reference. This shows significant promise for enhancing the on-state currents of CMOS transistors and thus for improving circuit speed.

The second technique is closer to strain memorization methods and relies on the ability of the BOX to creep under high-temperature annealing. At SSDM 2015 in Japan, Leti researchers showed that with this local-stress technique they can turn regular unstrained SOI structures into tensile strained Si (sSOI) for NFET areas. Moreover, this “BOX-creep” process can also be applied to compressive strain creation, as presented at the 2015 Silicon Nanoelectronics Workshop (SNW) conference.

Strained channels enable an increase in the on-state current of CMOS transistors. As a result, the corresponding IC circuits can deliver more speed at the same power, or reduced consumed power and longer battery life at the same performance.

Strained channels have also been proven to be an effective way to increase the performance of n- and p-MOSFET transistors via mobility enhancement of electrons and holes. These techniques boost carrier transport in the CMOS channel and thus increase the on-state currents. Beginning with the 90nm node, this strain option has been one of the microelectronics industry’s main approaches to improving IC speed in bulk transistors. While it was not necessary at the 28nm node for FD-SOI, it becomes mandatory beyond the 22/20nm node.

“Leti has continuously focused on improving and fine-tuning FD-SOI technology’s inherent advantages, since pioneering the technology 20 years ago,” said Maud Vinet, head of Leti’s Advanced CMOS Laboratory. “These two new techniques broaden the capabilities of Leti’s FD-SOI platform for next-generation devices, and further position the technology to be a vital part of the Internet of Things and electronics products of the future.”

SSDM 2015: Stress profile from 2D Raman extractions for Si MESAs after BOX creep process with 50nm thick SiN


National Institute of Standards and Technology (NIST) researchers are seeing the light, but in an altogether different way. And how they are doing it just might be the semiconductor industry’s ticket for extending its use of optical microscopes to measure computer chip features that are approaching 10 nanometers, tiny fractions of the wavelength of light.

Using a novel microscope that combines standard through-the-lens viewing with a technique called scatterfield imaging, the NIST team accurately measured patterned features on a silicon wafer that were 30 times smaller than the wavelength of light (450 nanometers) used to examine them. They report that measurements of the etched lines–as thin as 16 nanometers wide–on the SEMATECH-fabricated wafer were accurate to one nanometer. With the technique, they spotted variations in feature dimensions amounting to differences of a few atoms.

Measurements were confirmed by those made with an atomic force microscope, which achieves sub-nanometer resolution, but is considered too slow for online quality-control measurements. Combined with earlier results, the NIST researchers write, the new proof-of-concept study suggests that the innovative optical approach could be a “realistic solution to a very challenging problem” facing chip makers and others aiming to harness advances in nanotechnology. All need the means for “nondestructive measurement of nanometer-scale structures with sub-nanometer sensitivity while still having high throughput.”

Light-based, or optical, microscopes can’t “see” features smaller than the wavelength of light, at least not in the crisp detail necessary for making accurate measurements. However, light does scatter when it strikes so-called subwavelength features and patterned arrangements of such features. “Historically, we would ignore this scattered light because it did not yield sufficient resolution,” explains Richard Silver, the physicist who initiated NIST’s scatterfield imaging effort. “Now we know it contains helpful information that provides signatures telling us something about where the light came from.”

With scatterfield imaging, Silver and colleagues methodically illuminate a sample with polarized light from different angles. From this collection of scattered light–nothing more than a sea of wiggly lines to the untrained eye–the NIST team can extract characteristics of the bounced lightwaves that, together, reveal the geometry of features on the specimen.

Light-scattering data are gathered in slices, which together image the volume of scattered light above and into the sample. These slices are analyzed and reconstructed to create a three-dimensional representation. The process is akin to a CT scan, except that the slices are collections of interfering waves, not cross-sectional pictures.

“It’s the ensemble of data that tells us what we’re after,” says project leader Bryan Barnes. “We may not be able to see the lines on the wafer, but we can tell you what you need to know about them: their size, their shape, their spacing.”

Scatterfield imaging has critical prerequisites that must be met before it can yield useful data for high-accuracy measurements of exceedingly small features. Key steps entail detailed evaluation of the path light takes as it beams through lenses, apertures and other system elements before reaching the sample. The path traversed by light scattering from the specimen undergoes the same level of scrutiny. Fortunately, scatterfield imaging lends itself to thorough characterization of both sequences of optical devices, according to the researchers. These preliminary steps are akin to error mapping so that recognized sources of inaccuracy are factored out of the data.

The method also benefits from a little advance intelligence–the as-designed arrangement of circuit lines on a chip, down to the size of individual features. Knowing what is expected to be the result of the complex chip-making process sets up a classic matchup of theory vs. experiment.

The NIST researchers can use standard equations to simulate light scattering from an ideal, defect-free pattern and, in fact, any variation thereof. Using wave analysis software they developed, the team has assembled an indexed library of light-scattering reference models. So once a specimen is scanned, the team relies on computers to compare their real-world data to models and to find close matches.
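The library-matching step lends itself to a compact illustration: given a measured scatterfield signature and a precomputed library of simulated signatures, return the closest entries by least-squares residual. The Python toy below uses synthetic curves indexed by linewidth and height; it is not NIST’s actual wave-analysis software.

import numpy as np

def best_matches(measured, library, top_k=3):
    # Rank library entries by sum-of-squares distance to the measured signature
    scores = {name: float(np.sum((measured - sim) ** 2)) for name, sim in library.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])[:top_k]

# Toy library of simulated angle-resolved signatures, keyed by (linewidth_nm, height_nm)
angles = np.linspace(-30.0, 30.0, 61)
library = {(w, h): h / 30.0 * np.sin(angles / w)
           for w in (14, 15, 16, 17) for h in (28, 30, 32)}

rng = np.random.default_rng(3)
measured = library[(16, 30)] + rng.normal(0.0, 0.01, angles.size)
print(best_matches(measured, library))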

From there, succeeding rounds of analysis home in on the remaining differences, reducing them until the only ones that remain are due to variations in geometry, such as irregularities in the height, width, or shape of a line.

Measurement results achieved with the NIST approach might be said to cast light itself in an entirely new light. Their new study, the researchers say, shows that once disregarded scattered light “contains a wealth of accessible optical information.”

Next steps include extending the technique to even shorter wavelengths of light, down to ultraviolet, or 193 nanometers. The aim is to accurately measure features as small as 5 nanometers.

This work is part of a larger NIST effort to supply measurement tools that enable the semiconductor industry to continue doubling the number of devices on a chip about every two years and to help other industries make products with nanoscale features. Recently, NIST and Intel researchers reported using an X-ray technique to accurately measure features on a silicon chip to within fractions of a nanometer.

Physicists at the Technical University of Munich, the Los Alamos National Laboratory and Stanford University (USA) have tracked down semiconductor nanostructure mechanisms that can result in the loss of stored information – and halted the amnesia using an external magnetic field. The new nanostructures comprise common semiconductor materials compatible with standard manufacturing processes.

Quantum bits, or qubits for short, are the basic logical elements of quantum information processing (QIP) that may represent the future of computer technology. Since they process problems in a quantum-mechanical manner, such quantum computers might one day solve complex problems much more quickly than is currently possible, or so researchers hope.

In principle, there are various possibilities of implementing qubits: photons are an option equally as viable as confined ions or atoms whose states can be altered in a targeted manner using lasers. The key questions regarding their potential use as memory units are how long information can be stored in the system and which mechanisms might lead to a loss of information.

A team of physicists headed by Alexander Bechtold and Professor Jonathan Finley at the Walter Schottky Institute of the Technical University of Munich and the Cluster of Excellence Nanosystems Initiative Munich (NIM) have now presented a system comprising a single electron trapped in a semiconductor nanostructure. Here, the electron’s spin serves as the information carrier.

By evaporating indium gallium arsenide onto a gallium arsenide substrate TUM physicists created nanometer-scale hills, so-called quantum dots. An electron trapped in one of these quantum dots can be used to store information. Hitherto unknown memory loss mechanisms could be switched off by applying a magnetic field. Credit:  Fabian Flassig / TUM


The researchers were able to precisely demonstrate the existence of different data loss mechanisms and also showed that stored information can nonetheless be retained using an external magnetic field.

Electrons trapped in a quantum dot

The TUM physicists evaporated indium gallium arsenide onto a gallium arsenide substrate to form their nanostructure. As a result of the different lattice spacings of the two semiconductor materials, strain is produced at the interface between the crystal lattices. The system thus forms nanometer-scale “hills” – so-called quantum dots.

When the quantum dots are cooled down to liquid helium temperatures and optically excited, a single electron can be trapped in each of the quantum dots. The spin states of the electrons can then be used as information stores. Laser pulses can read and alter the states optically from outside. This makes the system ideal as a building block for future quantum computers.

Spin up and spin down correspond to the standard logical information units 0 and 1. On top of these, however, come additional intermediate states: quantum-mechanical superpositions of up and down.

Hitherto unknown memory loss mechanisms

However, there is one problem: “We found out that the strain in the semiconductor material leads to a new and until recently unknown mechanism that results in the loss of quantum information,” says Alexander Bechtold. The strain creates tiny electric fields in the semiconductor that influence the nuclear spin orientation of the atomic nuclei.

“It’s a kind of piezoelectric effect,” says Bechtold. “It results in uncontrolled fluctuations in the nuclear spins.” These can, in turn, modify the spin of the electrons, i.e. the stored information. The information is lost within a few hundred nanoseconds.

In addition, Alexander Bechtold’s team was able to provide concrete evidence for further information loss mechanisms, for example that electron spins are generally influenced by the spins of the roughly 100,000 surrounding atomic nuclei.

Preventing quantum mechanical amnesia

“However, both loss channels can be switched off when a magnetic field of around 1.5 tesla is applied,” says Bechtold. “This corresponds to the magnetic field strength of a strong permanent magnet. It stabilizes the nuclear spins and the encoded information remains intact.”

“Overall, the system is extremely promising,” according to Jonathan Finley, head of the research group. “The semiconductor quantum dots have the advantage that they harmonize perfectly with existing computer technology since they are made of similar semiconductor material.” They could even be equipped with electrical contacts, allowing them to be controlled not only optically using a laser, but also using voltage pulses.

A new era of electronics and even quantum devices could be ushered in with the fabrication of a virtually perfect single layer of “white graphene,” according to researchers at the Department of Energy’s Oak Ridge National Laboratory.

Growth and transfer of 2-D material such as hexagonal boron nitride and graphene was performed by a team that included Yijing Stehle of Oak Ridge National Laboratory. Credit: ORNL


The material, technically known as hexagonal boron nitride, features better transparency than its sister, graphene, is chemically inert, or non-reactive, and atomically smooth. It also features high mechanical strength and thermal conductivity. Unlike graphene, however, it is an insulator instead of a conductor of electricity, making it useful as a substrate and the foundation for the electronics in cell phones, laptops, tablets and many other devices.

“Imagine batteries, capacitors, solar cells, video screens and fuel cells as thin as a piece of paper,” said ORNL’s Yijing Stehle, postdoctoral associate and lead author of a paper published in Chemistry of Materials. She and colleagues are also working on a graphene hexagonal boron 2-D capacitor and fuel cell prototype that are not only “super thin” but also transparent.

With their recipe for white graphene, ORNL researchers hope to unleash the full potential of graphene, which has not delivered performance consistent with its theoretical value. With white graphene as a substrate, researchers believe they can help solve the problem while further reducing the thickness and increasing the flexibility of electronic devices.

While graphene, which is stronger and stiffer than carbon fiber, is a promising material for data transfer devices, graphene on a white graphene substrate features several thousand times higher electron mobility than graphene on other substrates. That feature could enable data transfers that are much faster than what is available today. “Imagine your message being sent thousands of times faster,” Stehle said.

Stehle noted that this work is especially significant because it takes the material beyond theory. A recent theoretical study led by Rice University, for instance, proposed the use of white graphene to cool electronics. Stehle and colleagues have made high-quality layers of hexagonal boron nitride they believe can be cost-effectively scaled up to large production volumes.

“Various hexagonal boron nitride single-crystal morphologies – from triangles to hexagons – have been mentioned in theoretical studies, but for the first time we have demonstrated and explained the process,” Stehle said.

That process consists of standard atmospheric pressure chemical vapor deposition with a similar furnace, temperature and time, but there’s a twist. The difference is what Stehle describes as “a more gentle, controllable way to release the reactant into the furnace and figuring out how to take advantage of inner furnace conditions. These two factors are almost always neglected.”

Stehle continued: “I just thought carefully beforehand and was curious. For example, I remind myself that there are many conditions in this experiment that can be adjusted and could make a difference. Whenever I see non-perfect results, I do not count them as another failure but, instead, another condition adjustment to be made. This ‘failure’ may become valuable.”

Co-authors of the paper are Harry Meyer, Raymond Unocic, Michelle Kidder, Georgios Polizos, Panos Datskos, Roderick Jackson and Ivan Vlassiouk of ORNL and Sergei Smirnov of New Mexico State University. Funding was provided by the Laboratory Directed Research and Development program. A portion of the research was conducted at the Center for Nanophase Materials Sciences, a DOE Office of Science User Facility at ORNL.

UT-Battelle manages ORNL for the DOE’s Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.

Researchers from North Carolina State University have discovered a new phase of solid carbon, called Q-carbon, which is distinct from the known phases of graphite and diamond. They have also developed a technique for using Q-carbon to make diamond-related structures at room temperature and at ambient atmospheric pressure in air.

Phases are distinct forms of the same material. Graphite is one of the solid phases of carbon; diamond is another.

“We’ve now created a third solid phase of carbon,” says Jay Narayan, the John C. Fan Distinguished Chair Professor of Materials Science and Engineering at NC State and lead author of three papers describing the work. “The only place it may be found in the natural world would be possibly in the core of some planets.”

Q-carbon has some unusual characteristics. For one thing, it is ferromagnetic — which other solid forms of carbon are not.

“We didn’t even think that was possible,” Narayan says.

In addition, Q-carbon is harder than diamond, and glows when exposed to even low levels of energy.

“Q-carbon’s strength and low work-function — its willingness to release electrons — make it very promising for developing new electronic display technologies,” Narayan says.

But Q-carbon can also be used to create a variety of single-crystal diamond objects. To understand that, you have to understand the process for creating Q-carbon.

Researchers start with a substrate, such as sapphire, glass or a plastic polymer. The substrate is then coated with amorphous carbon — elemental carbon that, unlike graphite or diamond, does not have a regular, well-defined crystalline structure. The carbon is then hit with a single laser pulse lasting approximately 200 nanoseconds. During this pulse, the temperature of the carbon is raised to 4,000 Kelvin (around 3,727 degrees Celsius) and then rapidly cooled. This operation takes place at one atmosphere — the same pressure as the surrounding air.

The end result is a film of Q-carbon, and researchers can control the process to make films between 20 nanometers and 500 nanometers thick.

By using different substrates and changing the duration of the laser pulse, the researchers can also control how quickly the carbon cools. By changing the rate of cooling, they are able to create diamond structures within the Q-carbon.

“We can create diamond nanoneedles or microneedles, nanodots, or large-area diamond films, with applications for drug delivery, industrial processes and for creating high-temperature switches and power electronics,” Narayan says. “These diamond objects have a single-crystalline structure, making them stronger than polycrystalline materials. And it is all done at room temperature and at ambient atmosphere – we’re basically using a laser like the ones used for laser eye surgery. So, not only does this allow us to develop new applications, but the process itself is relatively inexpensive.”

And, if researchers want to convert more of the Q-carbon to diamond, they can simply repeat the laser-pulse/cooling process.

If Q-carbon is harder than diamond, why would someone want to make diamond nanodots instead of Q-carbon ones? Because we still have a lot to learn about this new material.

“We can make Q-carbon films, and we’re learning its properties, but we are still in the early stages of understanding how to manipulate it,” Narayan says. “We know a lot about diamond, so we can make diamond nanodots. We don’t yet know how to make Q-carbon nanodots or microneedles. That’s something we’re working on.”

NC State has filed two provisional patents on the Q-carbon and diamond creation techniques.