
Sub-Fab Fault Detection and Classification (FDC) software platforms collect, integrate and analyze operational data.

BY ERIK COLLART, Edwards, Sanborn, NY

The ability to provide the reliable, high-quality vacuum environment that most semiconductor manufacturing requires is an often overlooked aspect of the whole fab process. The unexpected failure of a vacuum pump can bring significant disruption to the manufacturing process, potentially imposing a heavy penalty in lost productivity and scrapped product. The sub-fab, where vacuum and abatement systems are typically located and so named because it sits literally below the fab floor, has evolved dramatically over the years, from simply a location outside the fab in which to house supporting equipment to an environment that is in many ways as sophisticated as the fab itself. Just as manufacturers have adopted advanced monitoring and data analytics to optimize fab operations, they are finding significant benefit in applying the same techniques to sub-fab operations. Sub-Fab Fault Detection and Classification (FDC) software platforms, such as EdCentra, Edwards’ newest equipment monitoring, data acquisition and analytics platform, collect, integrate and analyze operational data from the sub-fab, providing a comprehensive solution for vacuum security.

The challenge of cost-effective innovation

The semiconductor industry faces many challenges, including the high pace of innovation and the need to constantly improve operational efficiencies, decrease costs, reduce adverse environmental impact and ensure the safety of personnel in the fab and residents of the surrounding community. Some of the ways these challenges have been met in the past no longer apply. For instance, although there is still device scaling in new technology nodes, the type of simple geometric device scaling driven by Constant-Field Scaling rules [1] – to drive innovation, improve efficiency and reduce costs per die – effectively ran out a decade ago.

Innovation has continued, though along very different lines, introducing ever more complex device architectures and increasing the use of exotic materials and manufacturing methods, such as epitaxial and atomic layer deposition. These innovations have all extended development time and time-to-market, driven up cost, reduced efficiency (lower yields, more frequent equipment preventive maintenance cycles) and brought new and tighter environmental restrictions (stringent local, national and international regulations, such as those on CO2 emissions) and safety challenges (toxic precursor materials and waste products). Delivering timely and cost-effective innovation is now a major issue for the semiconductor industry. In response, manufacturers have recognized the strategic necessity of integrating and analyzing all the information available from their processes. They are therefore starting to adopt an integrated fab data and information management approach that accounts for all the factors affecting time-to-market at the lowest possible cost. The sub-fab and its associated support systems cannot be omitted from this approach.

The importance of vacuum

Most of the critical steps in a chip manufacturing process are conducted under high vacuum conditions, and vacuum quality is one of the most important parameters in these process steps. Vacuum quality is a combination of vacuum level and vacuum content. No vacuum is absolute, and there are always trace amounts of non-process gases present in process chambers that can have a major impact on the process if not controlled.

As any fab equipment or process engineer will tell you, maintaining vacuum quality is so important that pumps are almost never shut off and process chambers are almost never brought to atmospheric pressure, even when idle for long periods of time. Maintenance activities on process chambers are performed, whenever possible, with minimal or no exposure to atmosphere. This is for a very good reason: once a chamber has been vented to atmosphere it may take a very long time to return it to the previous known-good-vacuum state, affecting equipment uptime and process yield.

The vacuum state can therefore affect wafer quality and overall fab costs through its effect on yield or through losses incurred as a result of unplanned vacuum failures during wafer processing. For example, insufficient vacuum levels or trace amounts (ppm level) of unintended gases, such as O2 or H2O, in an ion implant process can greatly reduce the stability of high-voltage power supplies, leading, in turn, to fluctuations in ion beam current, non-uniform implant conditions on the wafer, and ultimately to poor and non-reproducible wafer yields. A pump “crash” during a batch process that scraps an entire production batch – normally 125 wafers – is very costly in both direct product loss and process downtime. Even in single-wafer processes, unplanned pump failure can cause significant losses, as some process tools require days or weeks to requalify.

Fab managers face a difficult choice between the costs of vacuum failure and the costs of too-frequent maintenance. Optimizing this choice is one area where sub-fab equipment monitoring and advanced analytics can make an important contribution. The most effective optimization occurs in an adaptive maintenance regime, where pump maintenance is performed in parallel with tool maintenance, thereby virtually eliminating vacuum pumps as a cause of lost tool time. Long prediction horizons are required for successful adaptive maintenance, enabling the longer PM intervals (months) typical of sub-fab equipment to be synchronised with the shorter PM intervals (weeks) of the fab process equipment. To achieve this while assuring sustained vacuum quality, additional types of sensors and improved predictive capabilities and time horizons will be needed. The remainder of this paper highlights Edwards’ exploratory work on using mechanical vibration sensor data to obtain a reliable and long prediction horizon for mechanical failure modes [2].

Failure prediction using vibrational sensor data

Monitoring vibrations to assess the health of rotating machines has a long and successful history. Intrinsic bearing frequencies can be calculated from rotation speeds, and wear-generated perturbations to these frequencies can be detected to predict bearing failures and other mechanical failure modes. However, these existing methods do not translate well to a semiconductor environment, where process-induced failure modes are more frequent. The sub-fab working environment also tends to be extremely noisy from a vibration-spectrum perspective, and the effects of process-induced failure modes on standard vibration spectra are largely unknown.
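The intrinsic bearing frequencies mentioned above follow from standard rolling-element kinematics. A minimal sketch (textbook formulas with made-up bearing geometry, not Edwards' monitoring code):

```python
import math

def bearing_fault_frequencies(shaft_hz, n_balls, ball_d, pitch_d, contact_deg=0.0):
    """Classical bearing defect frequencies computed from shaft rotation
    speed and bearing geometry (ball and pitch diameters in the same units)."""
    ratio = (ball_d / pitch_d) * math.cos(math.radians(contact_deg))
    return {
        "BPFO": shaft_hz * n_balls / 2 * (1 - ratio),            # outer-race defect
        "BPFI": shaft_hz * n_balls / 2 * (1 + ratio),            # inner-race defect
        "BSF":  shaft_hz * pitch_d / (2 * ball_d) * (1 - ratio**2),  # ball spin
        "FTF":  shaft_hz / 2 * (1 - ratio),                      # cage (train) frequency
    }

# hypothetical 6000 rpm shaft (100 Hz) with a 9-ball bearing
freqs = bearing_fault_frequencies(shaft_hz=100.0, n_balls=9, ball_d=7.9, pitch_d=38.5)
```

Wear on a specific race shows up as energy at, and in sidebands around, the corresponding frequency, which is what the classical methods look for.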

We have developed a new method of unlocking key predictive information (Fault Detection or FD) from vibration data, based on a “fingerprinting” technique, which translates complex, noisy data into a single dynamic coefficient that can be compared easily with existing predictive maintenance parameters. Further vibrational sub-band analysis provides specific failure mode identification and root-cause analysis, thus providing a key fault classification (FC) capability. This method will be referred to as the Vibration Indicator, or VI, from here on.
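The actual fingerprinting algorithm is proprietary, but the idea of collapsing a noisy spectrum into one dynamic coefficient that is zero for a healthy machine can be sketched as follows. This is a conceptual illustration only; the sub-band split and log-power distance are assumptions, not Edwards' method:

```python
import numpy as np

def vibration_indicator(signal, baseline_spectrum, n_bands=16):
    """Conceptual 'fingerprint' indicator: compress the deviation of the
    current vibration spectrum from a known-good baseline spectrum into a
    single coefficient (zero = healthy). The per-band deviations support
    sub-band analysis for failure-mode classification."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    bands = np.array_split(np.arange(len(spectrum)), n_bands)
    band_dev = np.array([
        np.log1p(spectrum[idx].sum()) - np.log1p(baseline_spectrum[idx].sum())
        for idx in bands
    ])
    return np.linalg.norm(band_dev) / np.sqrt(n_bands), band_dev

# healthy reference vs. the same machine with an emerging 1.2 kHz tone
t = np.arange(8192) / 10_000  # 10 kHz sample rate
healthy = np.sin(2 * np.pi * 50 * t)
baseline = np.abs(np.fft.rfft(healthy * np.hanning(len(healthy))))
vi_ok, _ = vibration_indicator(healthy, baseline)
vi_worn, band_dev = vibration_indicator(healthy + 0.3 * np.sin(2 * np.pi * 1200 * t), baseline)
```

The single coefficient trends over time like any other predictive-maintenance parameter, while inspecting `band_dev` shows which sub-band is responsible.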

Results

FIGURE 1 shows an example of the power of VI to extend visibility of a catastrophic bearing failure in a fab working environment. A departure of VI from zero indicates the emerging signature of mechanical bearings wear. The time horizon in this example is at least 60 days, providing extended visibility and increased process security.

A second fab-based production environment example, taken from an LP-CVD Si3N4 batch deposition process and shown in FIGURE 2, illustrates the sensitivity and predictive power of VI compared to traditional pump parameters: power and temperature. The ultimate cause of failure in this case was deposition-related. As can be seen, from day 60 onward changing process conditions caused a step-change in the temperature. The power curve develops patterns of spike behavior around day 120. Previously existing best-known-methods (BKM) for predictive maintenance, based on analysis of power and temperature data, can detect this emerging behavior using spike-area and frequency-based techniques, in this case with a time horizon of 40 days. The key observation in this example is that VI (blue curve) reacted immediately to an increased deposition of condensable materials, which led directly to an equipment failure 90 days later. The VI provided a time-to-failure horizon of 90 days (55% of observed pump life), more than double that of traditional parameters.

Accelerated lab testing provides further evidence of the extended time horizon VI affords. FIGURE 3 shows the results of a lab-based accelerated fluorine (F2) corrosion induced mechanical failure mode, with large F2 gas flows injected into the vacuum system and pump. The traditional power parameter is completely insensitive to the F2 flow and resulting corrosion. The VI, by contrast, shows a linear correlation with total accumulated flow, providing both early detection and a measure of the severity of the developing problem.


A second lab-based test (not shown) investigated the effects of oil contaminants and again confirmed the ability of VI to detect and quantify failure modes inaccessible to established methods. As in the corrosion example, a linear correlation was found between accumulated contamination and VI, while power measurements proved to be completely insensitive to oil contamination levels.

These results show that VI can significantly extend the time horizon for detecting equipment failure modes, well beyond current predictive capabilities and into the regime where effective maintenance pooling and the resultant cost savings can be realized. Moreover, these results can be translated into precise remaining-useful-life (RUL) predictions using various parameter-estimator techniques, complementing standard Weibull techniques. FIGURE 4 shows the results of an accelerated bearing-failure lab test for a dry pump, comparing VI and estimated RUL.
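As a simple illustration of how a trending indicator translates into a remaining-useful-life estimate, a linear extrapolation of VI to a failure threshold can be sketched. The threshold value and fit window here are hypothetical; the paper's actual parameter estimators are not disclosed:

```python
import numpy as np

def estimate_rul(days, vi_values, failure_threshold, fit_window=30):
    """Estimate remaining useful life (in days) by fitting a line to the
    recent VI trend and extrapolating to a failure threshold. A simple
    stand-in for more sophisticated parameter-estimator techniques."""
    d = np.asarray(days[-fit_window:], dtype=float)
    v = np.asarray(vi_values[-fit_window:], dtype=float)
    slope, intercept = np.polyfit(d, v, 1)
    if slope <= 0:
        return np.inf  # no degradation trend: no finite RUL prediction
    crossing_day = (failure_threshold - intercept) / slope
    return max(crossing_day - d[-1], 0.0)

# synthetic VI: flat (healthy) for 60 days, then a linear rise toward failure
days = np.arange(100)
vi = np.where(days < 60, 0.0, 0.02 * (days - 60))
rul = estimate_rul(days, vi, failure_threshold=1.0)  # -> 11.0 days remaining
```

In practice the estimate would be re-run as each new VI sample arrives, so the RUL prediction tightens as the failure approaches.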


Performance comparison

Tables 1 and 2 compare and contrast VI performance with mainstream SPC-like control methods, such as single-parameter threshold monitoring and multi-variate analysis (BKMS-F), in terms of detection capability, sensitivity, prediction time horizon and hit rate vs. false positives. Table 1 shows that VI considerably extends the prediction time horizon and, based on data gathered to date for detectable results, has demonstrated a 100 percent hit rate with no false positives. From Table 2 we see that VI extends predictability to mechanical failures, has high sensitivity, and detects problems as soon as they begin.


Summary and conclusions

The need for increased operational efficiency in semiconductor manufacturing is driving the development of smarter interconnected vacuum sub-systems and the adoption of integrated data and information management technologies. A case study described the combined use of the EdCentra sub-fab information management system and an innovative approach to vibrational analysis. Compared to current mainstream methods, VI provided an extended, and in some cases unique, predictive maintenance capability for mechanical pump failures and a very high level of sensitivity. For the data gathered so far on detectable faults, the hit rate has been 100 percent, with no false positives. Finally, advanced analytics and VI considerably extended the prediction time horizon from weeks to months. Together with existing predictive algorithms and methodologies for pumps, abatement and ancillary equipment, the capabilities provided by advanced information management and innovative monitoring technologies like VI have the potential to significantly reduce costs and increase productivity.

Acknowledgements

The author would like to acknowledge and thank Antonio Serapiglia and Angelo Maiorana for their ground-breaking work on vibrational analysis, and David Hacker and Alan Ifould for their inputs on the challenges and opportunities of sub-fab equipment maintenance and many fruitful discussions.

References

1. R. H. Dennard, F. H. Gaensslen, H.-N. Yu, V. L. Rideout, E. Bassous and A. R. LeBlanc, “Design of ion-implanted MOSFET’s with very small physical dimensions,” IEEE Journal of Solid-State Circuits, vol. SC-9, no. 5, October 1974.
2. A. Serapiglia, D. Hacker, E. J. Collart, A. Ifould and A. Maiorana, 28th Advanced Process Control Conference Proceedings, Mesa, Arizona, 2016.

In-line metrology methods used during extreme wafer thinning process pathfinding and development are introduced.

BY M. LIEBENS, A. JOURDAIN, J. DE VOS, T. VANDEWEYER, A. MILLER, E. BEYNE, imec, Leuven, Belgium & S. LI, G. BAST, M. STOERRING, S. HIEBERT, A. CROSS, KLA-Tencor Corporation, Milpitas, California

The pace of innovation in device packaging techniques has never been faster or more interesting than at the present time. Previously, data were sent through wires, whereas in recent packages components are connected directly using different 3D interconnect technologies. As the 3D interconnect density increases exponentially, pitches need to shrink to 5μm and below. Current 3D-SIC (3D-Stacked IC) interconnect technologies do not offer such high densities. Parallel front-end-of-line wafer processing in combination with wafer-to-wafer (W2W) bonding and extreme wafer thinning steps in the 3D-SOC (3D System On Chip) integration technology schemes, as depicted in FIGURE 1, enables the increase of 3D interconnect density.


During the extreme wafer thinning process pathfinding and development, different thinning techniques such as grinding, polishing and etching were evaluated in [1] and [2] to target a final Si thickness specification of 5μm. For the comparison of the thinning techniques, multiple success criteria were defined with which the thinning process must initially comply. Firstly, the final Si thickness (FST) across the wafer needs to be within certain limits to achieve, for example, a stable via-last etch process with requirements to land on the correct metal layers. Secondly, the thinning process must not induce damage on the top Si across the wafer, and especially at the wafer edge, which would directly impact the physical yield of the complete wafer stack. Finally, the wafer surface nanotopography (NT), shape and flatness need to be in control to ensure proper subsequent W2W bonding when going to multi-wafer stacks beyond N=2. To achieve these challenging criteria, the metrology systems used must cope with areas of the wafer previously deemed to be in the “minimal care zone” of 1 – 2mm from the wafer edge. The wafer edge characterization must also go hand in hand with patterned wafer topography measurements after thinning to maximize physical wafer yield.

In this paper, the in-line metrology methods used during the extreme wafer thinning process pathfinding and development are introduced. These metrology tools supplied results that enabled us to determine where the extreme wafer thinning process can be improved. The same techniques can eventually be used to validate the improvements and to monitor process stability when processes are released for volume production.

Metrology methods

Wafer Level Interferometry. For FST measurement and wafer surface shape and NT, a patterned wafer geometry system (KLA-Tencor’s WaferSightTM PWG) was used. This is a dual Fizeau interferometry system and simultaneously measures both the front surface and back surface height of patterned wafers at high spatial resolution. During the measurement, the wafer is supported in a vertical position to reduce any wafer distortion. The whole wafer acquisition is completed in a single shot allowing measurement of the front and back surface topography as well as wafer flatness and edge roll-off.

This tool is specifically designed for wafer geometry measurements with 1nm measurement precision and has previously been used to qualify the impact of wafer geometry on CMP in [3] and [4] and to determine the NT of a full wafer post CMP [5]. Using the device layout, the full-wafer NT map can be divided into individual dies and the range or peak-valley (PV) value can be the output for each individual die.
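The per-die peak-valley computation described here is straightforward to sketch (assuming, for illustration, a height map that tiles evenly into a rectangular die grid):

```python
import numpy as np

def per_die_peak_valley(nt_map, die_rows, die_cols):
    """Divide a full-wafer nanotopography height map into a die grid and
    report the peak-valley (max - min) range for each individual die."""
    h, w = nt_map.shape
    pv = np.empty((die_rows, die_cols))
    for i in range(die_rows):
        for j in range(die_cols):
            die = nt_map[i * h // die_rows:(i + 1) * h // die_rows,
                         j * w // die_cols:(j + 1) * w // die_cols]
            pv[i, j] = die.max() - die.min()
    return pv

# flat synthetic map with a single 3 nm bump in the lower-left die region
nt = np.zeros((40, 40))
nt[25, 5] = 3.0
pv_map = per_die_peak_valley(nt, die_rows=2, die_cols=2)  # only pv_map[1, 0] is 3.0
```

In a real flow the die grid would come from the device layout, and dies straddling the wafer edge would be excluded or handled separately.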

For this paper, the patterned wafer geometry (PWG) system is used to measure wafer thickness at multiple steps during W2W bonding and extreme wafer thinning to derive the final Si thickness of the top wafer after thinning. The thickness result supplied by PWG is the relative height variation measured by interferometry, with respect to the local absolute wafer thickness measured by a capacitive sensor before the interferometry measurement is performed. The tool can supply 2D and 3D representations of the wafer thickness measurement at high spatial resolution, as depicted in FIGURE 2.


Wafer Edge Inspection and Metrology. The all-surface wafer inspection and metrology system utilized (KLA-Tencor’s CIRCL-APTM) contains an edge inspection module. This module uses: (1) a laser scanning setup revolving around the wafer bevel; and, (2) a lateral edge profile camera acquiring images of the wafer edge while the wafer is rotating. The laser scan comprises the laser, multi-channel optics and photodetectors/photomultiplier tube (PMT).

The lateral edge profile images are used to measure and quantify the edge shape and edge trim dimensions (see FIGURE 3). Based on the edge shape, an optimal trajectory of the revolving optics is calculated for profile-corrected inspection, to ensure proper incidence of light on the wafer sample and to obtain a good signal-to-noise ratio.


The revolving laser scanner is used to perform simultaneous edge inspection and metrology using brightfield, darkfield and phase-contrast modes to capture a broad range of wafer edge defect types with sensitivity down to 0.5μm. Images are acquired in the different contrast modes from all zones comprising the wafer edge, i.e. top and bottom near-edge (5mm), top and bottom bevel, and apex. Part of a full wafer edge inspection image, including the notch, is shown in FIGURE 4.


Inspection is performed by comparing neighboring pixels on a tangential line. Pixels with a contrast or gray-value difference exceeding a user-defined threshold are considered to be part of defects. Using rule-based binning techniques, and by defining regions of interest and care areas, the implemented defect classification strategy achieves high classification accuracy and purity for the defects of interest.
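The neighboring-pixel comparison can be sketched for a single tangential scan line as follows (an illustrative simplification of the tool's detection logic, with a made-up threshold):

```python
import numpy as np

def edge_defect_pixels(scan_line, threshold):
    """Flag candidate defect pixels on one tangential scan line by comparing
    each pixel's gray value with its neighbor; differences above the
    user-defined threshold mark the start of a defect region."""
    diffs = np.abs(np.diff(scan_line.astype(float)))
    return np.flatnonzero(diffs > threshold) + 1  # index of the differing pixel

# a mostly uniform line with a bright defect spanning pixels 3-4
line = np.array([120, 121, 119, 180, 182, 120, 121], dtype=np.uint8)
hits = edge_defect_pixels(line, threshold=30)  # -> pixels 3 and 5 (defect edges)
```

Rule-based binning would then group such hits into connected defect regions and assign classes based on their location (near-edge, bevel, apex) and shape.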

Metrology is performed by detecting edge transitions on radial lines enabling characterization of coverage, concentricity and uniformity of layers, films or other line features on the wafer edge.

Front Side Metrospection. The all-surface wafer inspection and metrology system also contains a front side inspection module that uses: (1) time-delay-integration (TDI) technology with concurrent brightfield (BF) and darkfield (DF) inspection channels; (2) bright LED illumination for precision and stability; and (3) a set of recipe-selectable objectives to give different lateral resolutions.

The TDI camera detects an interference signal from the top and bottom surfaces of the thinned Si. An example of such fringes is shown in FIGURE 5. The front side inspection module uses three illumination colors (RGB) that give three sets of interference signals, each with its own characteristic amplitude and frequency. By analyzing these signals, the Si thickness at the edge of the thinned wafer can be determined. The high-resolution optics of the front side inspection module enable accurate thickness measurement where the edge rolls off rapidly.
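The underlying physics is thin-film interference: adjacent fringes correspond to an optical-thickness change of half a wavelength inside the Si. A back-of-the-envelope conversion (textbook relation; the tool's calibrated multi-color analysis is more involved):

```python
def thickness_change_from_fringes(n_fringes, wavelength_nm, refractive_index):
    """Convert a count of interference fringes observed across a region into
    the Si thickness change over that region: each fringe corresponds to a
    thickness change of lambda / (2 * n) in the film."""
    return n_fringes * wavelength_nm / (2 * refractive_index)

# e.g. 10 red fringes (633 nm) across the edge roll-off, n_Si ~ 3.88 at 633 nm
delta_t_nm = thickness_change_from_fringes(10, 633, 3.88)  # ~ 816 nm of Si
```

Using three colors gives three independent fringe patterns, which resolves the ambiguity a single wavelength leaves in the absolute thickness.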


Results

Edge Defectivity. Using edge defect inspection and classification, it was possible to compare different wafer thinning process sequences with respect to grinding-induced damage, edge chipping and delamination, and to fine-tune the process by minimizing the defect count of these defects of interest.

FIGURE 6 shows the results from automated edge defect inspection of wafers that received two different thinning process sequences. By placing inspection care areas on the regions of interest, i.e. near the wafer edge of the top thinned wafer, and by specifying defect classification rules, the inspection detected edge chipping and classified it accordingly with high accuracy. The count of detected edge chipping defects on the wafer thinned by approach A was significantly higher than on the wafer thinned by approach B. Edge integrity was better maintained when wafers were thinned using approach B. The details of the process sequences can be found in [1].


When further exploring thinning approach B, a detailed edge inspection showed that the thinning process sequence induced a lateral shrinkage of the top wafer in addition to the intended wafer thinning, resulting in pattern exposure on the landing wafer, as can be seen in the right inspection image of Fig. 6.

Global Wafer Thickness. The most important element in the extreme wafer thinning process is precise control of the FST and its variation, with a maximum 3σ repeatability of 50nm to obtain a precision-to-tolerance ratio smaller than or equal to 0.1. The FST was measured by PWG and is obtained by subtracting the thickness measurement of the bottom wafer from the thickness measurement of the wafer stack after bonding and thinning, according to the equation below:

FST(x,y) = Thickness #2(x,y) − Thickness #1(x,y) − Thickness dielectrics

The different components of this equation are depicted in FIGURE 7. Thickness #2(x,y) is the thickness of the total stack after W2W bonding and thinning. Thickness #1(x,y) is the thickness of the bottom wafer. Finally, to obtain the FST of the top wafer, the thicknesses of the dielectrics on the top and bottom wafers are subtracted. The latter thickness is considered constant, since the variation of the dielectric thickness is negligible compared to the variation of the FST.
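The subtraction itself is a per-pixel operation on registered thickness maps; a minimal sketch with made-up numbers:

```python
import numpy as np

def final_si_thickness(stack_map, bottom_map, dielectric_nm):
    """FST(x, y) = Thickness#2(x, y) - Thickness#1(x, y) - t_dielectric,
    with the combined dielectric thickness treated as a constant offset.
    Inputs are registered (x, y) thickness maps in nanometres."""
    return stack_map - bottom_map - dielectric_nm

# illustrative 4x4 maps: ~780 um bonded stack over a ~775 um bottom wafer
stack = np.full((4, 4), 780_000.0)
stack[0, 0] += 2_000.0               # a 2 um thickness excursion at one point
bottom = np.full((4, 4), 775_000.0)
fst = final_si_thickness(stack, bottom, dielectric_nm=1_000.0)  # baseline 4 um FST
```

The real maps have far higher spatial resolution, and the two measurements must be aligned to the same (x, y) grid before subtracting.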


FIGURE 8 shows the thickness profile of the top Si layer after the thinning process sequence as measured by PWG. The FST varied about 2μm center-to-edge, with a strong gradient when approaching the wafer edge. Between the wafer edge and 2mm from the wafer edge, it becomes challenging for standard wafer metrology tools to measure the thickness profile. The reasons are the wafer edge exclusion imposed by the tool and the fact that Si becomes non-opaque below a certain thickness, depending on the wavelength applied by the metrology tool. The CIRCL-AP was used to investigate the edge profile of the top wafer to complete the full-wafer characterization of the FST. Detailed results are elaborated in the following sections.

The results of the PWG measurements showed a clear correlation with standard ellipsometry-based metrology measurements, as can be seen in FIGURE 9. The advantage of PWG over ellipsometry is that more points on the wafer are measured at higher throughput, and results are more reliable in the presence of patterns in the complex stack of 3D-SOC W2W bonded wafers.

Edge Metrology. For the wafer edge profile of the bonded wafer pair after thinning, a stepwise decrease of the FST of the top wafer is expected, due to the edge trim of the top wafer before bonding (FIGURE 10). However, the FST showed a slower decrease when approaching the wafer edge.


With the edge metrology function, the CIRCL-AP was capable of detecting and reporting the radius at which the final Si thickness starts to decrease, as depicted in FIGURE 11. Since the top wafer received an edge trim width of 0.5mm, the uniform area of the top wafer’s top surface was expected to extend to a radius of about 149.5mm. However, the FST already started to decrease towards the wafer edge from a radius of 147.5mm. This decrease is the lateral shrinkage mentioned previously when discussing the results presented in Fig. 6.


Edge Thickness. The lateral shrinkage was further confirmed by detailed thickness measurements focusing on the wafer edge using the CIRCL-AP’s front side inspection module. The inspection tool with metrology capabilities (metrospection) showed the thickness profile and quantified the decrease as a function of wafer radius R and angle θ, as depicted in FIGURE 12. A gradual thickness decrease from 3μm to 0μm is observed, indicating that no Si is left in a 2mm-wide ring at the edge, even though the initial edge trim width was only 0.5mm.

Process improvement

The FST profile and edge shape of the top wafer were characterized using the previously described metrology techniques. To enable a stable and robust via-last process and to realize multi-wafer stacking, the FST variation needs to decrease below 1μm and the lateral shrinkage needs to be minimized. Optimization of the wafer thinning process sequence is ongoing: different hardware configurations are being applied, processes are being tuned, and the same metrology techniques described in this paper are being used to validate whether requirements are met.

Conclusions

We have shown the capability of two complementary metrology tools to characterize the extreme wafer thinning process. This tool set can also be implemented to control the performance in a production environment at high throughput. Excursions can be analyzed further using techniques like in-line AFM. When thinning Si to 5μm and below for 3D-SOC integration technology schemes, multiple challenges arise where different measurement techniques are needed to characterize the final Si thickness across the full wafer. A good control of the final Si thickness as well as the total thickness variation (TTV) will become important when further scaling down 3D interconnects and increasing their density.

Acknowledgements

The authors would like to thank Fumihiro Inoue, Nina Tutunjyan, Stefano Sardo and Edward Walsby for supplying wafers to inspect and measure, for the interpretation and discussion of the results afterwards, and for the early involvement of metrology in the process developments. This paper was previously published in the Proceedings of the 28th Annual SEMI Advanced Semiconductor Manufacturing Conference (ASMC 2017), Saratoga Springs, NY, 2017, pp. 331-336.

References

1. A. Jourdain, “Extreme Wafer Thinning Optimization for Via-Last Applications,” 3DIC, November 2016.
2. F. Inoue, “Characterization of Extreme Si Thinning Process for Wafer- to-Wafer Stacking,” ECTC, May 2016.
3. K. Freischlad, S. Tang, and J. Grenfel, “Interferometry for wafer dimensional metrology,” Proceedings of SPIE, 6672, 667202 (2007).
4. P. Vukkadala, K. T. Turner, and J. K. Sinha, “Impact of Wafer Geometry on CMP for Advanced Nodes,” Journal of the Electrochemical Society, 158(10), p. H1002 (2011).
5. L. Teugels, “Within-die and within-wafer CMP process characterization and monitoring using PWG Fizeau interferometry system,” ICPT, October 2016.
6. C. Mehanian et al., “Systems and Method for Simultaneously Inspecting a Specimen with Two Distinct Channels,” US Patent 7,782,452, issued August 2010.

Using scanning capacitance microscopy with a Park Systems atomic force microscope, a team at NASA successfully characterized both the spatial variations in capacitance and the topography of vacuum-channel nanoelectronic transistors.

BY MARK ANDREWS, Park Systems, Santa Clara, CA

Imagine the not-too-distant future when a NASA spacecraft edges silently into orbit around Mars. Its 473-million-mile journey included a trip around the sun to slingshot itself into geosynchronous orbit. Its mission: gather new site-specific details and deploy a rover as preludes to the first human mission to the red planet. But before anyone can take ‘one giant leap’, the Mars Path Marker needs to supply fresh data to anxious scientists back on earth.

The probe cost $1.8 billion. Its planning, construction and flight time to Mars took eight years and thousands of work hours from all across the aerospace supply chain.

Red lights are now flashing all across screens back on earth at NASA’s Jet Propulsion Laboratory in Pasadena, California. The probe remains inactive while its earth-side controllers grow frantic. Path Marker should have automatically powered-up for its first mapping transit, but instead hangs quietly above the ruddy Martian landscape.


Unbeknownst to controllers on earth, Path Marker wasn’t responding because of a short-circuit ‘latch-up’ in its silicon processors. Communications won’t resume for now—maybe not ever.

Earth could not see it happen, but when Path Marker flew around Sol, its passage coincided with an unusually large solar flare on the far side of our sun. More energy than usually strikes Mars in six months was released in a series of coronal explosions, sending cascades of lethal, heavy ions plowing through Path Marker’s delicate solid-state transistors as if its shielding wasn’t even there.

Despite the best of plans, precautions and preparations, this spacecraft is stuck in perpetual ‘neutral.’ Mission specialists are trying all available mission-saving workarounds, but only time will tell.

NASA researcher Dr. Jin-Woo Han hopes to prevent critical failures like the one in this fictional account of the Mars Path Marker mission. In reality, NASA has experienced all types of solid-state electronic failures during its decades of manned and robotic exploration. In his work, Dr. Han documented nine different types of failures in 17 named missions, as well as many more that did not cause a mission failure but impeded or slowed a program.

Although the Mars Path Marker mission is fictional, the need for a better semiconductor technology for deep space exploration is very real. That need is why Dr. Han and colleagues have placed hope in a new approach to solid-state transistors that utilizes some of the same principles that gave vacuum tubes their role in humanity’s first electronic products more than 100 years ago.

Han is a scientist at NASA’s Ames Research Center for Nanotechnology in Moffett Field, California. The center is led by Dr. Meyya Meyyappan; Dr. Han leads the vacuum device research team within the 20-person organization. One of his most recent research efforts is tied to his theories and practical applications that leverage the advantages of vacuum for creating better electron flow, but without the drawbacks in existing solid-state technology that NASA frequently faces. The new transistors, called vacuum-channel nanoelectronic devices, are not prone to disruption by cosmic radiation, solar flares, radical temperature changes or similar dangers that can be encountered once a spacecraft (or humans) leave earth’s magnetic fields and dense atmosphere.

The challenges of space exploration are daunting. While loss of life tops the list of potential consequences, damage to spacecraft instruments occurs much more commonly than the general public may realize. This damage remains a focus of concentrated research and engineering efforts to mitigate and remedy problems that can lead to lackluster performance or full system failures. The effort to ensure safe and productive operation of satellites, probes and spacecraft is second only to the agency’s zeal for keeping human space flight safe.

How can early 20th century vacuum tube technology solve NASA’s very 21st century problems? First of all, the vacuum nanotechnology that NASA is developing is generations beyond conventional vacuum tube engineering as it stood in the early 20th century. But vacuum-channel nanostructures and conventional vacuum tubes share essential functional similarities that make Dr. Han’s devices ideal candidates to replace today’s most robust silicon-based transistors.

Transistors enjoy their role in electronic technology because of their unique ability to amplify and switch electronic signals as well as electrical power. Power or current applied to one set of terminals controls the current flowing through another terminal pair (emitter/collector). And while a practical solid-state transistor was proposed in 1926 by researchers in Canada, materials science only matured enough for production in 1947, the landmark year in which researchers at AT&T Bell Labs (New Jersey, USA), followed independently a year later in France, produced designs that would become the forefathers of today's microelectronic wonders.

Practical vacuum tube components came into play before 1910, and they have several important advantages over solid-state transistors, including superior electron mobility. Like their solid-state cousins, tubes function by moving electrons unidirectionally from an emitter (a cathode) across a vacuum to be collected by the anode. Tubes fell out of favor for most low- and medium-power applications because of the advantages of solid-state construction: much smaller size and weight, ruggedness exceeding that of old-style tubes, an ability to be aggregated that enabled today's integrated circuits (ICs), and zero warm-up time, since silicon transistors require no cathode warming. Solid-state devices also provide substantially greater electrical current efficiency.

It’s easy to see why solid-state electronics won a place in aerospace engineering. But once we actually got into space, we learned quickly that even robust silicon transistors were no match for deep space radiation. To make the best transistors that we had “good enough” for space, NASA mastered the process of creating backup systems and a host of other measures to keep missions on track. It also partnered with other agencies like DARPA (Defense Advanced Research Projects Agency) and the US Department of Defense to develop alternate technologies such as gallium arsenide (GaAs), gallium nitride (GaN), and the latest work from Dr. Han’s nanotech vacuum team. GaAs and GaN are much more robust than silicon, but decades of research have proven them less suitable than silicon for constructing complex ICs.

Although conventional solid-state transistors enjoy clear advantages in terrestrial applications, in-space damage typically comes in three forms: instantaneous, cumulative and catastrophic. While the first two effects can frequently be worked around due to NASA’s extensive reliance on back-up systems, catastrophic effects can be “mission enders.”

Dealing with likely and possible performance disruptions costs NASA dearly: extra weight and design time to create multiple backup systems, which can also complicate missions while consuming valuable payload space. Imagine if using a laptop computer on Earth required double or even triple the number of vital components; that laptop would easily be a third larger and more expensive. For NASA, ignoring these risks would impede success or, in worst-case situations, lead to a disaster that costs millions and could even endanger lives if components were tied to a human spaceflight mission.

A common way to deal with these unknowns is to overbuild—create more circuit pathways or entire redundant subsystems—because some components will almost certainly be “sacrificed” during encounters with space radiation. NASA frequently must opt for “acceptable” performance instead of what might ideally be possible, simply because it cannot count on systems that perform optimally at launch remaining that way throughout an entire mission.

The advantage a controlled vacuum brings to transistors is tied to the fact that solid-state devices can suffer long-term failures from the cumulative effects of repeated bombardment by ionizing radiation, which destroys device features at the nanometer scale. Most commonly, the total ionizing dose causes gradual parametric shifts, resulting in on-state current reductions and increased off-state current leakage. A vacuum-based device does not typically suffer from these effects, in part because the absence of material (gases or solids) in the space between emitter and collector not only speeds the flow of electrons but is in essence protective: there is very little present in this tiny void that can be damaged by ionizing radiation.

Dr. Han’s team studied several different compounds and structures that could be used to construct the vacuum-channel nano devices that would eventually prove likely successors to conventional transistors. These included bulk MOS, silicon-on-insulator (SOI) and gate-all-around (GAA) MOSFET designs, and what proved to be the most promising combination of material and design, a GAA nanowire in a vacuum gate dielectric (FIGURE 1).


To be effective and meet NASA’s requirements, new transistor technology had to be manufacturable at industrial scale using existing processes and techniques common to conventional silicon fabs or similar infrastructure. The ideal design would bring the “best of both worlds” together for a solution that is electrically sound, practical and compact as well as lightweight and reliable in the face of exposure to radiation and radical temperature fluctuations.

“But we did not ever approach this as a replacement for all silicon electronics or silicon transistors at large,” said Dr. Han. “While the devices could easily be used on earth (that is where we tested them, in gamma radiation chambers, after all), the cost efficiencies of regular silicon MOSFETs would not likely be improved by our vacuum-channel nanoelectronic designs.”

To measure device performance, Dr. Han and his team employed a Scanning Capacitance Microscope (SCM) with an Atomic Force Microscope (AFM) from Park Systems. They investigated the nanoscale properties of vacuum-channel devices, seeking to ascertain their viability as transistors while also observing whether the fabrication methodology for gate insulators could be controlled.

“SCM with AFM is a powerful combination for investigating transistor devices—together, the two methods provide the user with a non-destructive process of characterizing both charge distribution and surface topography with high spatial resolution and sensitivity,” said Byong Kim, Analytical Systems Director, Park Systems.


Kim explained that atomic force microscopy with SCM is ideal for investigating transistor designs at the nanoscale. Together, the two methods provide researchers with non-destructive processes for characterizing both charge distribution and surface topography with high spatial resolution and sensitivity. In SCM, a metal probe tip and a highly sensitive capacitance sensor augment standard AFM hardware. During testing, voltage is applied between the probe tip and the sample surface. When examining metal-oxide-semiconductor devices, this creates a pair of capacitors in series: the insulating oxide layer on the device surface and the active depletion layer at the interfacial region between the oxide layer and the doped silicon. Total capacitance is then determined by the thicknesses of the oxide layer and the depletion layer, the latter of which is influenced by the level of silicon substrate doping as well as the amount of DC voltage applied between the tip and the device’s surface.
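The series-capacitor picture above can be sketched numerically. The layer thicknesses, dielectric constants and tip area below are illustrative assumptions, not values from the article; the point is only the mechanism SCM exploits: widening the depletion layer under DC bias lowers the total capacitance.

```python
# Series MOS capacitance as probed by SCM: a minimal sketch with
# assumed (not measured) layer parameters.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def parallel_plate_c(eps_r, thickness_m, area_m2):
    """Capacitance of one parallel-plate layer."""
    return eps_r * EPS0 * area_m2 / thickness_m

def series_c(c_ox, c_dep):
    """Two capacitors in series: 1/C = 1/C_ox + 1/C_dep."""
    return 1.0 / (1.0 / c_ox + 1.0 / c_dep)

# Illustrative numbers: 2 nm SiO2 (eps_r ~ 3.9) and a 10 nm silicon
# depletion layer (eps_r ~ 11.7) under a 50 nm x 50 nm effective tip area.
area = 50e-9 * 50e-9
c_ox = parallel_plate_c(3.9, 2e-9, area)
c_dep = parallel_plate_c(11.7, 10e-9, area)
c_total = series_c(c_ox, c_dep)

# More DC bias widens the depletion layer, lowering C_dep and hence
# the total capacitance -- the contrast mechanism SCM images.
c_total_wide = series_c(c_ox, parallel_plate_c(11.7, 20e-9, area))
assert c_total_wide < c_total
```

The total is always dominated by the smaller of the two capacitances, which is why the bias-dependent depletion layer, rather than the fixed oxide, carries the doping information.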

Dr. Han reported that by utilizing scanning capacitance microscopy with a Park Systems atomic force microscope, the team successfully characterized both the spatial variations in capacitance and the topography of his vacuum-channel nanoelectronic transistors. By examining the line profiles of the topography and capacitance data acquired along an identical path down the device’s source-drain interface, further insight was gained into the relationship between key physical structures and recorded changes in capacitance.

The nanoelectronic device’s topography (at the source-drain interface) was imaged and revealed a vacuum channel spanning 250 nm in length, with peaks and valleys separated by a distance of approximately 5 nm (FIGURES 3-5). The electrical functionality of the device was assessed through the acquisition of a capacitance map. This map revealed a relatively negatively charged (-1.4 to -1.8μV) source-drain terminal and adjacent quantum dot, followed by a relatively positively charged vacuum channel (2μV) and another dot-terminal structure (-1.4 to -1.8μV) on the other end of the source-drain interface. This alternating series of capacitance changes at key structural points suggests that the device is fully capable of functioning as an effective transistor.


NASA is now working toward next steps to investigate the potential of producing vacuum-channel nanoelectronic devices in higher volumes for further study. The team utilized standard semiconductor manufacturing techniques, so fabrication fits within existing process and materials technologies, though the ideal material for the transistors is still being investigated.

“While the work initially focused on silicon as an underlying technology, we next want to explore silicon carbide and graphene as alternatives, technologies that are more robust. Also, the charge emission efficiency of silicon may not be sufficient, and we saw some degradation due to oxidization,” he remarked. “While we have demonstrated that a silicon vacuum-channel nanoelectronic device is possible, we now need to look at better emitter efficiency and reliability, balanced against ease of manufacturing. Everything is a tradeoff in some regards.”

The Ames Research Center is open to partnering through industrial and university collaboration, like the work it has done in conjunction with Park Systems. NASA is already working with additional industrial partners and welcomes further collaboration.

The ‘wonder material’ graphene has many interesting characteristics, and researchers around the world are looking for new ways to utilise them. Graphene itself does not have the characteristics needed to switch electrical currents on and off and smart solutions must be found for this particular problem. “We can make graphene structures with atomic precision. By selecting certain precursor substances (molecules), we can code the structure of the electrical circuit with extreme accuracy,” explains Peter Liljeroth from Aalto University, who conceived the research project together with Ingmar Swart from Utrecht University.

Seamless integration

The electronic properties of graphene can be controlled by synthesizing it into very narrow strips (graphene nanoribbons). Previous research has shown that the ribbon’s electronic characteristics are dependent on its atomic width. A ribbon that is five atoms wide behaves similarly to a metallic wire with extremely good conduction characteristics, but adding two atoms makes the ribbon a semiconductor. “We are now able to seamlessly integrate five atom-wide ribbons with seven atom-wide ribbons. That gives you a metal-semiconductor junction, which is a basic building block of electronic components,” according to Ingmar Swart.
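The width dependence described above matches the well-known tight-binding result for armchair graphene nanoribbons: ribbons whose width is N = 3p + 2 carbon atoms are (nearly) metallic, while N = 3p and N = 3p + 1 ribbons are semiconducting. The sketch below is this textbook classification rule, not the Aalto/Utrecht team's own analysis.

```python
# Width-family rule for armchair graphene nanoribbons in the simple
# tight-binding picture: N = 3p + 2 atoms across -> (nearly) metallic;
# other widths -> semiconducting. Illustrative only.

def gnr_family(n_atoms: int) -> str:
    """Classify an armchair nanoribbon by its width in atoms."""
    if n_atoms % 3 == 2:
        return "metallic"
    return "semiconducting"

# The article's 5-atom ribbon conducts like a metallic wire; adding two
# atoms (7 atoms wide) turns it into a semiconductor.
assert gnr_family(5) == "metallic"
assert gnr_family(7) == "semiconducting"
```

This is why joining a 5-atom ribbon to a 7-atom ribbon yields a metal-semiconductor junction from a single material.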

Chemistry on a surface

The researchers produced their electronic graphene structures through a chemical reaction. They evaporated the precursor molecules onto a gold crystal, where they react in a very controlled way to yield new chemical compounds. “This is a different method from that currently used to produce electrical nanostructures, such as those on computer chips. For graphene, it is so important that the structure is precise at the atomic level and it is likely that the chemical route is the only effective method,” Ingmar Swart concludes.

Electronic characteristics

The researchers used advanced microscopic techniques to also determine the electronic and transport characteristics of the resulting structures. It was possible to measure electrical current through a graphene nanoribbon device with an exactly known atomic structure. “This is the first time where we can create e.g. a tunnel barrier and really know its exact atomic structure. Simultaneous measurement of electrical current through the device allows us to compare theory and experiment on a very quantitative level,” says Peter Liljeroth.

Conventional electronic devices make use of semiconductor circuits and they transmit information by electric charges. However, such devices are being pushed to their physical limit and the technology is facing immense challenges to meet the increasing demand for speed and further miniaturisation. Spin wave based devices, which utilise collective excitations of electronic spins in magnetic materials as a carrier of information, have huge potential as memory devices that are more energy efficient, faster, and higher in capacity.

While spin wave based devices are one of the most promising alternatives to current semiconductor technology, spin wave signal propagation is anisotropic in nature – its properties vary in different directions – thus posing challenges for practical industrial applications of such devices.

A research team led by Professor Adekunle Adeyeye from the Department of Electrical and Computer Engineering at the NUS Faculty of Engineering, has recently achieved a significant breakthrough in spin wave information processing technology. His team has successfully developed a novel method for the simultaneous propagation of spin wave signals in multiple directions at the same frequency, without the need for any external magnetic field.

Using a novel structure comprising different layers of magnetic materials to generate spin wave signals, this approach allows for ultra-low power operations, making it suitable for device integration as well as energy-efficient operation at room temperature.

“The ability to propagate spin wave signals in arbitrary directions is a key requirement for actual circuitry implementation. Hence, the implication of our invention is far-reaching and addresses a key challenge for the industrial application of spin wave technology. This will pave the way for non-charge based information processing and realisation of such devices,” said Dr Arabinda Haldar, who is the first author of the study and was formerly a Research Fellow with the Department at NUS. Dr Haldar is currently an Assistant Professor at Indian Institute of Technology Hyderabad.

The research team published the findings of their study in the scientific journal Science Advances on 21 July 2017. This discovery builds on an earlier study by the team that was published in Nature Nanotechnology in 2016, in which a novel device that could transmit and manipulate spin wave signals without the need for any external magnetic field or current was developed. The research team has filed patents for these two inventions.

“Collectively, both discoveries would make possible the on-demand control of spin waves, as well as the local manipulation of information and reprogramming of magnetic circuits, thus enabling the implementation of spin wave based computing and coherent processing of data,” said Prof Adeyeye.

Moving forward, the team is exploring the use of novel magnetic materials to enable coherent long distance spin wave signal transmission, so as to further the applications of spin wave technology.

An international team of physicists, materials scientists and string theoreticians has observed a phenomenon on Earth that was previously thought to occur only hundreds of light years away or at the time when the universe was born. This result could lead to a more evidence-based model for understanding the universe and for improving the energy-conversion process in electronic devices.

Using a recently discovered material called a Weyl semimetal, similar to 3D graphene, scientists at IBM Research (NYSE: IBM) have mimicked a gravitational field in their test sample by imposing a temperature gradient. The study was supervised by Prof. Kornelius Nielsch, Director at the Leibniz Institute for Materials and Solid State Research Dresden (IFW) and Prof. Claudia Felser, Director at the Max Planck Institute for Chemical Physics of Solids in Dresden.

After conducting the experiment in a cryolab at the University of Hamburg with high magnetic fields, a team of theoreticians from TU Dresden, UC Berkeley and the Instituto de Fisica Teorica UAM/CSIC confirmed with detailed calculations that they observed a quantum effect known as an axial-gravitational anomaly, which breaks one of the classical conservation laws, such as charge, energy and momentum.

This law-breaking anomaly had previously been derived through purely theoretical reasoning with methods based on string theory. It was believed to exist only at extremely high temperatures of trillions of degrees, in an exotic form of matter called a quark-gluon plasma, at the early stages of the universe, deep within the cosmos, or created using particle colliders. But to their surprise, the researchers discovered that it also exists on Earth in the properties of solid-state materials, on which much of the computing industry is based, spanning from tiny transistors to cloud data centers. This discovery is appearing today in the peer-reviewed journal Nature.

“For the first time, we have experimentally observed this fundamental quantum anomaly on Earth which is extremely important towards our understanding of the universe,” said Dr. Johannes Gooth, an IBM Research scientist and lead author of the paper. “We can now build novel solid-state devices based on this anomaly that have never been considered before to potentially circumvent some of the problems inherent in classical electronic devices, such as transistors.”

“This is an incredibly exciting discovery. We can clearly conclude that the same breaking of symmetry can be observed in any physical system, whether it occurred at the beginning of the universe or is happening today, right here on Earth,” said Prof. Dr. Karl Landsteiner, a string theorist at the Instituto de Fisica Teorica UAM/CSIC and co-author of the paper.

IBM scientists predict this discovery will open up a rush of new developments around sensors, switches and thermoelectric coolers or energy-harvesting devices, for improved power consumption.

By Ed Korczynski

Veeco Instruments (Veeco) recently announced that Veeco CNT—formerly known as Ultratech/Cambridge Nanotech—shipped its 500th Atomic Layer Deposition (ALD) system to North Carolina State University. The Veeco CNT Fiji G2 ALD system will enable the university to perform research for next-generation electronic devices, including wearables and sensors. Veeco announced the overall acquisition of Ultratech on May 26 of this year. Executive technologists from Veeco discussed the evolution of ALD technology with Solid State Technology in an exclusive interview just prior to SEMICON West 2017.

Professor Roy Gordon of Harvard University has been famous for decades as an innovator in the science of thin-film deposition, and people from his group were part of the founding of Cambridge Nanotech in 2003. Continuity from the original team has been maintained throughout the acquisitions, such that Veeco inherited a great deal of process know-how along with the hardware technologies. “Cambridge Nanotech has had a broad history of working with ALD technology,” said Ganesh Sundaram, VP of Veeco CNT Applied Technology, “and that’s been a big advantage for us in working with some major researchers who really appreciate what we’re providing.”

The Figure shows that the company’s ALD chambers have evolved over time from simple single-wafer thermal ALD, to single-wafer plasma-enhanced ALD (PEALD), to a large chamber targeting batch processing of up to ten 370 mm x 470 mm (Gen2.5) flat panels for display applications, and a “large area” chamber capable of handling 1 m x 1.2 m substrates for photovoltaic and FPD applications. The large-area chamber allows customers to do things like put down an encapsulating layer or an active layer, such as buffer materials on CIGS-based solar cells.

Evolution of Atomic-Layer Deposition (ALD) technology starts with single-wafer thermal chambers, adds plasma energy, and then goes to batch processing for manufacturing. (Source: Veeco CNT).

“There is a tendency to think that ALD belongs only in the high-k dielectric application for semiconductor devices, but there are many ongoing applications outside of IC fabs,” reminded Gerry Blumenstock, VP and GM of the MBE business unit and Veeco CNT. “Customers who want to do heterogeneous materials development can now have MBE and ALD in a single tool connected by a vacuum cluster configuration. We have customers today that do not want to break vacuum between processes.” Veeco’s MBE tools are mostly used for R&D, but are also reportedly used for HVM of laser chips.

To date, Cambridge Nanotech tools are generally used by R&D labs, but Veeco is open to the possibility of creating tools for High-Volume Manufacturing (HVM) if customers call for them. “Now that this is part of Veeco, we have the service infrastructure to be able to support end-users in high-volume manufacturing like any of the major OEMs,” said Blumenstock. “It’s an interesting future possibility, but in the next six months to a year we’re focusing on improving our offering to the R&D community. Still, we’re staying close to HVM because if a real opportunity arose there’s no reason we couldn’t get into it.”

In IC fab R&D today, some of the most challenging depositions are of Self-Assembled Monolayers (SAM) that are needed as part of the process flow to enable Direct Self-Assembly (DSA) of patterns to extend optical lithography to the finest possible device features. SAM are typically created using ALD-type processes, and can also be used to enable selective ALD of more than a monolayer. Veeco CNT is actively working on SAM in R&D with multiple customers now, and claims that major IC device manufacturers have purchased tools.

At the leading edge of materials R&D, researchers are always experimenting with new chemical precursors. “Having a precursor that has good vapor pressure, and is reactive yet somewhat stable, is what is needed,” reminded Sundaram. “People will generally choose a liquid over a solid precursor because of higher vapor pressure. There are many classes of precursors, and many are halogen-based, but they have disadvantages in some reactions. So we continue to see a move to metal-organic precursors, which tend to provide good vapor pressures and not form undesirable byproducts.”

By Ed Korczynski 

Global industry R&D hub IMEC defines the “IMEC 7nm-Node” (I7N) for finFETs to have 56nm Contacted Gate Pitch (CGP) with 40nm Metal Pitch (MP), and such critical mask layers can be patterned with a single exposure of 0.33 N.A. EUVL as provided by the ASML NXE:3400B tool. To reach IMEC 3nm-Node (I3N) patterning targets of ~40 CGP and ~24 MP, either double exposure of 0.33 N.A. EUVL would be needed or else single-exposure of 0.55 N.A. EUVL as promised by the next-generation ASML tool. All variations of EUVL require novel photoresists and anti-reflective coatings (ARC) to be able to achieve the desired patterning.
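Why the jump from 0.33 N.A. to 0.55 N.A. matters can be estimated from the Rayleigh criterion, half-pitch ≈ k1·λ/NA. The k1 factor of 0.32 below is an assumed aggressive-but-demonstrated value, not a number from the article:

```python
# Rayleigh resolution estimate for EUV lithography.
# Assumption: k1 = 0.32, an aggressive single-exposure process factor.

WAVELENGTH_NM = 13.5  # EUV wavelength

def min_half_pitch(na: float, k1: float = 0.32) -> float:
    """Smallest printable half-pitch (nm) for a given numerical aperture."""
    return k1 * WAVELENGTH_NM / na

hp_033 = min_half_pitch(0.33)  # ~13.1 nm: matches the 13 nm half-pitch
                               # IMEC reports for single 0.33 N.A. exposure
hp_055 = min_half_pitch(0.55)  # ~7.9 nm: comfortably below the ~12 nm
                               # half-pitch (24 nm MP) the I3N node targets
```

Under this estimate, a 40 nm metal pitch (20 nm half-pitch) fits in one 0.33 N.A. exposure, while 24 nm metal pitch does not, consistent with the article's statement that I3N needs either double exposure at 0.33 N.A. or single exposure at 0.55 N.A.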

The Figure shows that IMEC has led tremendous progress on the photoresists, with best resolution in a single 0.33 N.A. EUVL exposure of 13nm half-pitch (HP) line arrays. The most important parameter for the photoresist is the sensitivity target of 20 mJ/cm2, but at that dosage the best materials seen today have unacceptably high line-width roughness of >5nm three-sigma.

“If you’re talking about lines of 16nm width, for 3-sigma you want to be less than 3nm line-width-roughness,” explained Steegen during the 2017 IMEC Technology Forum. “Smoothing techniques are post-develop technologies that basically reduce line-width-roughness. We are working with many partners, and all are making progress in reducing line-width roughness though post-develop techniques.”

Top-down SEM images of the best achieved EUVL resolutions using 0.33 N.A. stepper and Chemically-Amplified Resist (CAR) or metal-oxide Non-Chemically-Amplified Resist (NCAR) formulations, along with post-development “smoothing” technologies to improve the Line-Width Roughness (LWR) to meet target specifications. (Source: IMEC)

The Figure also shows that IMEC has been working with vacuum deposition companies on atomic-layer deposition (ALD) or chemical-vapor deposition (CVD) processes to ideally take off 2 nm of sidewall roughness. Plasma energy may be capacitively- or inductively-coupled to a vacuum chamber to allow for either PEALD or PECVD processing. Such precise atomic-scale processing may be composed of “dep/etch” sequences of one/few atomic layer depositions followed by light plasma etching such that the nominal line-width would not necessarily change. However, this approach necessitates that the wafer leave the lithography track and move to a separate vacuum-tool.

To save on cost and time, LWR smoothing may be accomplished to some extent today in the litho track by specialized spin-on materials. Companies that supply lithography resolution extension (EXT) materials such as spin-on hard masks (SOHM) and anti-reflective coatings (ARC) have looked at ways spin-on materials can improve the LWR of post-developed resist lines. This can be combined with “shrink” materials that add controlled thicknesses to sidewalls of holes, or with “trim” materials that subtract controlled thicknesses from the sidewalls of lines. Generally, some manner of complex chemical engineering is used to create a film that either forms or breaks bonds when thermally driven by a bake step, and after image transfer to underlying SOHM layers the shrink/trim material is typically stripped in a solvent such as propylene glycol methyl ether acetate (PGMEA).

EUVL photoresists may be based on metal-oxide nanoparticles, instead of on extensions to the Chemically-Amplified Resist (CAR) formulations that have been mainstays of ArF/ArFi lithography for decades. Inpria Corp.—the 10-year-old start-up supported by industry—has developed a tin-oxide family of blends that are shown as the Non-Chemically-Amplified Resist (NCAR) in the Figure. NCAR metal-oxide resists show similar LWR at similar exposure doses to CARs. However, the metal oxides in the NCAR can often replace SOHM materials, saving cost and complexity in the resist stack.

IMEC’s work on EUVL with ASML steppers leads to the belief that the source power will increase to allow throughput to rise from today’s ~100 wph to ~120 wph by the end of this year. However, those throughputs assume 20mJ/cm2 resist-speed, and masks may require 30 mJ/cm2 target exposures even with post-develop smoothing steps.
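The throughput risk in that last sentence follows from a simple scaling: when the scanner is exposure-limited, wafers per hour fall roughly inversely with resist dose at fixed source power. This is a simplified first-order model, not ASML's throughput calculator, which also accounts for overhead such as wafer exchange and alignment:

```python
# Exposure-limited throughput scaling: wph ~ 1 / dose at fixed source power.
# A first-order approximation that ignores fixed per-wafer overheads.

def throughput_at_dose(base_wph: float, base_dose: float, new_dose: float) -> float:
    """Scale wafers-per-hour inversely with resist dose (mJ/cm2)."""
    return base_wph * base_dose / new_dose

# The hoped-for 120 wph at the 20 mJ/cm2 target would fall to about
# 80 wph if resists end up needing 30 mJ/cm2.
wph = throughput_at_dose(120.0, 20.0, 30.0)  # 80.0
```

This is why the 20 mJ/cm2 sensitivity target is fought over so hard: a 50% dose increase erases much of the promised source-power gain.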

[DISCLOSURE: Ed Korczynski is also Sr. Technology Analyst with TECHCET Group, and author of the Critical Materials Report “Photoresists and Extensions and Ancillaries 2017”.]

By Pete Singer

In order to increase device performance, the semiconductor industry has slowly been implementing many new materials. From the 1960s through the 1990s, only a handful of materials were used, most notably silicon, silicon oxide, silicon nitride and aluminum. By 2020, more than 40 different materials will be in high-volume production, including more “exotic” materials such as hafnium, ruthenium, zirconium, strontium, complex III-Vs (such as InGaAs), cobalt and SiC.

These new materials create a variety of challenges with regard to process integration (understanding material interface issues, adhesion, stress, cross-contamination, etc.). But they also create new challenges when it comes to material handling.

“As we go through technology node advancements, people are looking at the potential of different materials on the wafer,” notes Clint Haris, Senior Vice President and General Manager of the Microcontamination Control Division at Entegris (Billerica, MA). “They’re looking at different chemicals that are required to clean those materials to reduce defects and improve their operational yield, and what we’re increasingly seeing is that fabs are concerned with the fact that contamination can be introduced in the fluid stream anywhere in that long process flow.”

Haris said that part of their mission at Entegris is to make sure that the entire supply chain – from the development of a chemistry at the supplier to its use on a wafer in a fab – is working in harmony, particularly with regard to any materials that might “touch” the chemicals. “Not only do you want to filter and purify things throughout the whole fluid flow,” he said, “but you want to have that last filtration right before the fluid touches the surface of the wafer.”

The goal of filtration is, of course, to remove contaminants and particles before they reach the wafer, but the exact purity required can be a moving target. “Today we’re seeing a lot of these materials and liquids, which have a parts per trillion purity level, but there’s a desire to move to parts per quadrillion,” Haris said. That’s the equivalent of one drop in all the water that flows over Niagara Falls in one day.

In addition to the filtration challenge of achieving that level, there’s the question of whether analytical tools even exist to measure contaminants at that level. The answer: not yet. “It’s actually a real issue where some of the metrology tools cannot meet our customers’ needs at those levels, and so one of the things that we’ve done is develop some techniques internally to enhance the capability of metrology,” Haris said. “We also work on how we prepare our samples so you can detect contamination at those levels.” Because that level of detection is so difficult, and in some cases impossible, Haris said fabs are increasingly putting additional filters at the process tool and at the dispense nozzle to “protect against the unknown.”

Earlier this year, Entegris introduced Purasol™, a first-of-its-kind solvent purifier that removes a wide variety of metal microcontaminants found in organic solvents used in ultraclean chemical manufacturing processes. Using tailored membrane technology, the purifier can efficiently remove both dissolved and colloidal metal contaminants from a wide variety of ultra-pure, polar and non-polar solvents. “One of the main things that our customers are seeing is a concern with metal contamination in the photo process that can result in particular defects (see Figure), such as bridge defects,” Haris explained. Increasingly, fabs are moving from just filtration (removing particles) to purification (removing ions and metals), he added.

Illustration of metal contamination inducing defects on lithography process.

Entegris also recently acquired W. L. Gore & Associates’ water and chemical filtration product line for microelectronics applications. “This is a Teflon-based product line, which is used in ultrapure water filtration for semiconductor fabs, but it’s also a product that we’re selling into some of the fine chemical purification markets for some of the chemistries that are brought into the fabs,” Haris said. “We are focused on new product development and M&A to enhance our capability to support our customers as they overcome these contamination challenges.”

What would a simple technique to remove thin layers from otherwise thick, rigid semiconductor crystals mean for the semiconductor industry? This concept has been actively explored for years, as integrated circuits made on thin layers hold promise for developments including improved thermal characteristics, lightweight stackability and a high degree of flexibility compared to conventionally thick substrates.

In a significant advance, a research group from IBM successfully applied their new “controlled spalling” layer transfer technique to gallium nitride (GaN) crystals, a prevalent semiconductor material, and created a pathway for producing many layers from a single substrate.

As they report in the Journal of Applied Physics, from AIP Publishing, controlled spalling can be used to produce thin layers from thick GaN crystals without causing crystalline damage. The technique also makes it possible to measure basic physical properties of the material system, like strain-induced optical effects and fracture toughness, which are otherwise difficult to measure.

The same 20-micron spalled GaN film, demonstrating the film’s flexibility. Credit: Bedell/IBM Research

Single-crystal GaN wafers are extremely expensive; a single 2-inch wafer can cost thousands of dollars, so extracting more layers means getting more value out of each wafer. Thinner layers also provide performance advantages for power electronics, since they offer lower electrical resistance and allow heat to be removed more easily.
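The resistance advantage of thinner layers follows from a back-of-the-envelope calculation: for uniform vertical current flow, a layer’s series resistance scales linearly with its thickness (R = ρ·t/A). The sketch below illustrates that scaling; the resistivity, thickness, and area values are hypothetical, chosen only to show the proportionality, not measured GaN device data.

```python
def vertical_resistance(resistivity_ohm_cm, thickness_cm, area_cm2):
    """Series resistance for uniform vertical current flow: R = rho * t / A."""
    return resistivity_ohm_cm * thickness_cm / area_cm2

# Hypothetical numbers for illustration only.
rho = 0.1      # ohm*cm, assumed drift-layer resistivity
area = 0.01    # cm^2, assumed device area

full_substrate = vertical_resistance(rho, 350e-4, area)  # ~350 um wafer
spalled_film = vertical_resistance(rho, 20e-4, area)     # 20 um spalled film

print(f"~350 um substrate: {full_substrate:.3f} ohm")
print(f"20 um spalled film: {spalled_film:.3f} ohm")
```

With these assumed values, the 20-micron film contributes roughly 17 times less series resistance than the full-thickness substrate, simply because the current path is that much shorter.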

“Our approach to thin film removal is intriguing because it’s based on fracture,” said Stephen W. Bedell, research staff member at IBM Research and one of the paper’s authors. “First, we deposit a nickel layer onto the surface of the material we want to remove. This nickel layer is under tensile stress (think drumhead). Then we simply roll a layer of tape onto the nickel, hold the substrate down so it can’t move, and then peel the tape off. When we do this, the stressed nickel layer creates a crack in the underlying material that goes down into the substrate and then travels parallel to the surface.”

Their method boils down to simply peeling off the tape, with the nickel layer and a thin layer of the substrate material stuck to it.

“A good analogy of how remarkable this process is can be made with a pane of glass,” Bedell said. “We’re breaking the glass in the long direction, so instead of a bunch of broken glass shards, we’re left with two full sheets of glass. We can control how much of the surface is removed by adjusting the thickness of the nickel layer. Because the entire process is done at room temperature, we can even do this on finished circuits and devices, rendering them flexible.”
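The control knob Bedell describes, adjusting the nickel thickness to set how much material is removed, can be sketched with a crude linear model: in steady-state spalling analyses, the fracture depth grows roughly in proportion to the stressor-layer thickness. The proportionality constant below is hypothetical; the real value depends on the film stress, the substrate’s fracture toughness, and the elastic mismatch between nickel and substrate.

```python
def spall_depth_um(nickel_thickness_um, k=3.0):
    """Crude illustrative model: spall depth proportional to stressor thickness.

    k is a hypothetical dimensionless coefficient; in practice it is set by
    the film stress, substrate fracture toughness, and elastic mismatch.
    """
    return k * nickel_thickness_um

# Thicker nickel drives the crack deeper, removing a thicker layer.
for t_ni in (2.0, 5.0, 8.0):
    print(f"{t_ni:.0f} um Ni -> ~{spall_depth_um(t_ni):.0f} um removed")
```

The point of the sketch is only the monotonic relationship: depositing a thicker stressed layer removes a proportionally thicker slice of the substrate, which is what lets the process be tuned from thin device films up to thicker slabs.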

The group’s work is noteworthy for multiple reasons. For starters, it’s by far the simplest method of transferring thin layers from thick substrates. And it may well be the only layer transfer method that’s materially agnostic.

“We’ve already demonstrated the transfer of silicon, germanium, gallium arsenide, gallium nitride/sapphire, and even amorphous materials like glass, and it can be applied at nearly any time in the fabrication flow, from starting materials to partially or fully finished circuits,” Bedell said.

Turning a parlor trick into a reliable process, and ensuring the approach could deliver consistent, crack-free transfer, led to surprises along the way.

“The basic mechanism of substrate spalling fracture started out as a materials science problem,” he said. “It was known that metallic film deposition would often lead to cracking of the underlying substrate, which is considered a bad thing. But we found that this was a metastable phenomenon, meaning that we could deposit a thick enough layer to crack the substrate, but thin enough so that it didn’t crack on its own — it just needed a crack to get started.”

Their next discovery was how to make the crack initiation consistent and reliable. While there are many ways to generate a crack — laser, chemical etching, thermal, mechanical, etc. — it turns out that the simplest way, according to Bedell, is to terminate the thickness of the nickel layer very abruptly near the edge of the substrate.

“This creates a large stress discontinuity at the edge of the nickel film so that once the tape is applied, a small pull on the tape consistently initiates the crack in that region,” he said.

Though it may not be obvious, gallium nitride is a vital material in our everyday lives. It is the underlying material used to fabricate blue, and now white, LEDs (for which the 2014 Nobel Prize in Physics was awarded) as well as high-power, high-voltage electronics. Its inherent biocompatibility, combined with controlled spalling, may also enable ultrathin bioelectronics or implantable sensors.

“Controlled spalling has already been used to create extremely lightweight, high-efficiency GaAs-based solar cells for aerospace applications and flexible state-of-the-art circuits,” Bedell said.

The group is now working with research partners to fabricate high-voltage GaN devices using this approach. “We’ve also had great interaction with many of the GaN technology leaders through the Department of Energy’s ARPA-E SWITCHES program and hope to use controlled spalling to enable novel devices through future partnerships,” Bedell said.