
Synthetic diamond heat spreaders and GaN-on-Diamond wafers have emerged as a leading thermal-management technology for RF Power Amplifiers

BY THOMAS OBELOER, DANIEL TWITCHEN, JULIAN ELLIS, BRUCE BOLLIGER,
Element Six Technologies, Santa Clara, CA & MARTIN KUBALL AND JULIAN ANAYA, Center for Device Thermography and Reliability (CDTR), H. H. Wills Physics Laboratory, University of Bristol, Bristol, U.K.

GaN-based transistors and their related RF Power Amplifiers (PAs) have emerged as the leading solid-state technology to replace traveling wave tubes in radar, EW (Electronic warfare) systems, and satellite communications, and to replace GaAs transistors in cellular base stations. However, significant thermal limitations prevent GaN PAs from reaching their intrinsic performance capability. Metallized synthetic diamond heat spreaders have recently been used to address this thermal management challenge, particularly in cellular base station and military radar applications.

This article covers several important issues that advanced thermal solutions, particularly for RF power amplifiers, must address. Here, we are presenting new materials, such as CVD (chemical vapor deposition) diamond as a heat spreader to reduce overall package thermal resistance compared to today’s more commonly used materials for thermal management. Also, mounting aspects and some new developments regarding the thermal resistance at the bonding interfaces to diamond heat spreaders are discussed.

CVD diamond

Diamond possesses an extraordinary set of properties including the highest known thermal conductivity, stiffness and hardness, combined with high optical transmission across a wide wavelength range, low expansion coefficient, and low density. These characteristics can make diamond a material of choice for thermal management to significantly reduce thermal resistance. CVD diamond is now readily commercially available in different grades with thermal conductivities ranging from 1000 to 2000 W/mK. Also very important is the fact that CVD diamond can be engineered to have fully isotropic characteristics, enabling enhanced heat spreading in all directions. FIGURE 1 shows a comparison of the thermal conductivity of CVD diamond with other materials traditionally used for heat spreading purposes.

FIGURE 1. Comparison of thermal conductivity of CVD diamond and traditional heat spreading materials [1, 2].


On-going development in the technologies used to synthesize CVD diamond has enabled it to become readily available in volume at acceptable cost. Unmetallized CVD diamond heat spreaders are available today at a typical volume cost of $1/mm³, with prices varying depending on the thermal-conductivity grade used. In some instances, system operation at elevated temperatures can reduce both the initial cost of the cooling sub-system and its on-going operating cost. When applied with appropriate die-attach methods, diamond heat spreaders provide reliable solutions for semiconductor packages with significant thermal management challenges [1].

Application notes for the use of CVD Diamond

To obtain the most effective use of the extreme properties of CVD Diamond in overall system design, package integration issues need to be carefully considered. Failure to address any one of these issues will result in a sub-optimal thermal solution. Here are the most important points to be considered:

  • Surface preparation
  • Mounting techniques
  • Diamond thickness
  • Functional considerations
  • Metallizations and thermal barrier resistance

Surface preparation: The surfaces of die-level devices have to be machined in a suitable fashion to allow good heat transfer. Surface flatness for heat spreaders should typically be better than 1 micron/mm and the roughness better than Ra < 50 nm; both can be achieved by polishing. Any deficiency in flatness must be compensated for by the mounting technique, which in turn increases thermal resistance.

Mounting techniques: Whereas in some advanced device applications, such as high-power laser diodes, atomic-force bonding techniques are being considered, most applications currently employ soldering techniques for die attachment to the heat spreader. Again, solder layers should be kept to a minimum thickness, particularly for the primary TIM1 (the thermal interface material (TIM) between die and heat spreader), to minimize thermal resistance. An important factor in applying solder joints is the expansion mismatch between the CVD diamond and the semiconductor material, as it can significantly influence performance and lifetime. GaAs (gallium arsenide) devices up to an edge length of 2.5 mm can be hard soldered to CVD diamond without CTE-mismatch problems. (Note that the CTE of CVD diamond is 1.0 ppm/K at 300 K.) For edge lengths greater than 2.5 mm, using a soft solder can avoid excessive stresses in the device. TABLE 1 shows the wide range of solder materials commercially available to address various soldering-process needs.

TABLE 1. Summary of soldering materials [2].
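As a rough, hypothetical illustration of the CTE-mismatch consideration above, the differential contraction across a die edge during solder cooldown can be estimated as Δα·ΔT·L. The GaAs CTE and the cooldown range below are assumed values, not figures from the article:

```python
# Rough estimate of differential thermal expansion between a GaAs die and a
# CVD diamond heat spreader during solder cooldown (illustrative values only).

CTE_GAAS = 5.7e-6      # 1/K, assumed typical value for GaAs
CTE_DIAMOND = 1.0e-6   # 1/K, CVD diamond at 300 K (value from the article)
DELTA_T = 260.0        # K, assumed cooldown from an AuSn solder joint to room temperature
EDGE_LENGTH_MM = 2.5   # mm, the hard-solder limit quoted in the article

# Differential contraction across the die edge, in micrometers
mismatch_um = (CTE_GAAS - CTE_DIAMOND) * DELTA_T * EDGE_LENGTH_MM * 1e3

print(f"Differential contraction over a {EDGE_LENGTH_MM} mm edge: {mismatch_um:.1f} um")
# Roughly 3 um of mismatch; larger dies accumulate proportionally more strain,
# which is why soft solders are recommended above ~2.5 mm edge length.
```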


Diamond thickness: The thickness of the CVD diamond is important. For devices with small hot spots, such as RF amplifiers or laser diodes, a thickness of 250 to 400 microns is sufficient. Diamond’s isotropic characteristics effectively spread the heat to reduce the maximum operating temperature at constant power output. However, applications with larger hot spots on the order of 1 to 10 mm in diameter require thicker diamond for better results. An example is disk lasers, which can have an optical output power of several kW and a power density of about 2 kW/cm²; a diamond thickness of several mm has proven beneficial to disk laser operation [3].

Functional considerations: There are also functional requirements that may be important. One is the electrical conductivity of the heat spreader. For devices such as laser diodes, it is easiest to run the drive current through the device and use the heat spreader for the ground contact. For other devices, the heat spreader is required to be insulating. As CVD diamond is an intrinsic insulator, this insulation can be maintained by keeping the side faces free of metallization. This could be required for RF amplifiers and transistors, especially at higher frequencies (f > 2 GHz).

Thermal simulation helps optimize the heat spreader configuration to find the best solution based on power output needs, material thickness, metallization scheme, heat source geometry and package configuration. For design optimization, it is important that the thermal simulation model includes the complete junction-to-case system, including the device details, all interfaces, materials and the subsequent heat sinking solution.
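For a quick first-pass sanity check of such a junction-to-case budget, the one-dimensional conduction resistances of the layers can simply be summed. The sketch below uses assumed layer thicknesses, conductivities and die area, not the authors’ model; a real optimization still requires full 3D simulation of spreading and the downstream heat sink:

```python
# Minimal 1D thermal-resistance stack estimate (junction to case) for a die
# soldered to a CVD diamond heat spreader.  All layer values are assumed,
# illustrative numbers; a real analysis needs 3D simulation of the package.

def layer_resistance(thickness_m, conductivity_w_mk, area_m2):
    """1D conduction resistance R = t / (k * A), in K/W."""
    return thickness_m / (conductivity_w_mk * area_m2)

AREA = (2.0e-3) ** 2          # 2 mm x 2 mm die footprint (assumed)

stack = [
    # (name, thickness [m], thermal conductivity [W/mK])
    ("SiC substrate (thinned)", 100e-6, 370.0),
    ("AuSn solder (TIM1)",       25e-6,  57.0),
    ("CVD diamond spreader",    400e-6, 1800.0),
    ("Solder to package base",   50e-6,  57.0),
]

r_total = 0.0
for name, t, k in stack:
    r = layer_resistance(t, k, AREA)
    r_total += r
    print(f"{name:28s} {r*1e3:7.2f} mK/W")

print(f"{'Total (1D stack)':28s} {r_total*1e3:7.2f} mK/W")
# At 10 W dissipation this stack alone gives roughly r_total * 10 K of rise;
# spreading resistance and the heat-sink interface add to this in practice.
```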

Metallizations and thermal barrier resistance

Metallizations are an essential component of the application of CVD diamond in RF amplifier and similar applications. Typically, for reasons of adhesion and mechanical and thermal robustness, three-layer metallization schemes are used. Such a scheme fundamentally comprises: a) a carbide-forming metal layer which bonds to the diamond component; b) a diffusion-barrier metal layer disposed over the carbide-forming layer; and c) a surface bonding layer disposed over the diffusion barrier which provides both a protective layer and a wettable surface onto which a metal solder or braze can be applied to bond the diamond heat spreader to the die and other device components. A particular example of such a three-layer metallization scheme is Ti / Pt / Au.

High-quality, sputter-deposited, thin-film metallizations are strongly recommended for advanced thermal solutions. Because thermal contact resistance between the device and the heat spreader must be minimized, any additional metal interface added to the system should be avoided. Sputtered layers, especially of titanium, can form a very effective chemical bond with CVD diamond to ensure long-term stability even at elevated temperatures. To separate the required gold attach layer from the titanium adhesion layer, a platinum or titanium/tungsten (TiW) barrier layer is recommended. The Ti/Pt/Au scheme is very commonly used in high-end devices and has excellent stability and endurance, even over extended lifetimes under changing thermal loads. However, this scheme also has a drawback: the thermal conductivities of titanium and platinum are relatively low (22 W/mK and 70 W/mK respectively). In the search for improved materials, chromium has been identified as a viable alternative. Chromium forms a carbide with diamond and is also readily used as a barrier layer, enabling it to perform both functions at a relatively high thermal conductivity of 93.9 W/mK. To test the thermal effectiveness of chromium, samples were prepared at the CDTR (Centre for Device Thermography and Reliability) at the University of Bristol comparing a standard Ti/Pt/Au (100/120/500 nm) metallization with the novel Cr/Au (100/500 nm) configuration. The measurements revealed that the effective thermal conductivity of the Cr/Au metallization is about four times higher than that of the Ti/Pt/Au. Results are shown in FIGURE 2.

FIGURE 2. Comparison of thermal conductivity of different metallization schemes [4].
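A back-of-the-envelope check on why the Cr/Au scheme helps is to sum the series thermal resistance per unit area of each metal layer, using the layer thicknesses quoted above and bulk conductivities. The gold conductivity is an assumed handbook-style value, and the diamond/metal thermal boundary resistance, which matters greatly in practice, is ignored, so this is only a rough consistency check:

```python
# Back-of-the-envelope series resistance (per unit area) of the two
# metallization stacks, using the layer thicknesses from the article and
# bulk thermal conductivities.  Interfacial (thermal boundary) resistances
# are ignored, so this is only a rough consistency check.

K_AU = 315.0   # W/mK, assumed bulk value for sputtered gold

stacks = {
    "Ti/Pt/Au": [(100e-9, 22.0), (120e-9, 70.0), (500e-9, K_AU)],
    "Cr/Au":    [(100e-9, 93.9), (500e-9, K_AU)],
}

for name, layers in stacks.items():
    r_area = sum(t / k for t, k in layers)   # m^2*K/W
    print(f"{name:9s}: {r_area*1e9:5.2f} x 1e-9 m^2*K/W")
# On this crude basis the Ti/Pt/Au stack is roughly 3x more resistive than
# Cr/Au; the measured ~4x difference also reflects the thermal boundary
# resistances that this simple sum does not capture.
```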


Application example

To demonstrate the impact of this Cr adhesion/barrier layer versus Ti/Pt/Au, high-power GaN-on-SiC HEMT (high electron mobility transistor) devices were mounted to a CVD diamond heat spreader. A cap layer of AuSn with a thickness of 25 microns was chosen. To ensure comparable results, all samples were placed on a temperature-stable platform also made from highly thermally conductive diamond material. Results are shown in FIGURE 3: in the left diagram, the base temperature is plotted for increasing power output from the device. As can be seen, the temperature for the Cr/Au configuration is significantly lower, by about 10°C at 9 W device power output. On the right-hand side, the graph shows the temperature measured directly on the transistor channel.

FIGURE 3. Temperatures as a function of power for different metallization schemes and solder thickness [4].


In this case, the lower thermal resistivity of the Cr-based metallization layer decreases the channel temperature by more than 20°C at 9 W power output.

This significant temperature reduction can result in as much as a four-times-longer device lifetime. Alternatively, such devices could be packaged in smaller footprints, at higher power densities, to take advantage of the improved heat spreading.
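The lifetime statement is consistent with a simple Arrhenius acceleration model. The sketch below is purely illustrative; the activation energy and the operating temperatures are assumed values, and real acceleration factors depend on the dominant failure mechanism:

```python
import math

# Arrhenius acceleration factor for a channel-temperature reduction
# (illustrative only; Ea and the temperatures are assumed, not from the article).
K_BOLTZMANN_EV = 8.617e-5   # eV/K
EA_EV = 1.2                 # eV, assumed activation energy

def acceleration_factor(t_hot_c, t_cool_c, ea_ev=EA_EV):
    """Ratio of failure rates between two junction temperatures (Celsius)."""
    t_hot = t_hot_c + 273.15
    t_cool = t_cool_c + 273.15
    return math.exp(ea_ev / K_BOLTZMANN_EV * (1.0 / t_cool - 1.0 / t_hot))

# e.g. dropping the channel from a hypothetical 175 C to 155 C (a 20 C reduction)
print(f"Lifetime multiplier: {acceleration_factor(175.0, 155.0):.1f}x")  # ~4x
```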

Outlook, future developments

One important finding from the above example is the need to modify device architecture for improved thermal management. The main temperature rise is within the device itself. Here, thinning the substrate to bring it closer to the diamond heat spreader would further enhance the thermal design. Mounting such devices with the active layers facing the diamond would provide even further benefit; an example would be mounting laser diodes p-side down with the quantum-well structures soldered directly against the heat spreader. Another way to bring the device gate junction closer to the diamond is to use a different substrate altogether. This has been demonstrated with GaN (gallium nitride) on diamond wafers, which remove both the Si substrate and the transition layers, replacing them with CVD diamond [5]. The result brings the diamond material within 1 micron of the heat-generating gate junctions. Initial users of GaN-on-diamond wafers for RF HEMT devices have demonstrated as much as 3 times the power density of equivalent GaN-on-SiC (silicon carbide) devices, today’s leading technology for advanced power devices [6].

Summary

As can be seen, significant thermal-management improvements to electronic systems can be realized by using advanced materials such as CVD diamond. The integration can be relatively straightforward, as the diamond heat spreader can be a direct replacement for AlN (aluminium nitride), BeO (beryllium oxide) or other advanced ceramics. Attention to detail at the interfaces, both in the choice of metals and in their thicknesses, is important to keep overall thermal resistance low and thereby optimize the effectiveness of the diamond.

As CVD diamond becomes more attractive as a heat spreader through improved synthesis technology, advanced processing and on-going cost reduction, its use in high power density applications has been increasing. This trend is expected to continue in the years to come, in line with the ever-increasing need for smaller and more powerful electronic devices and systems.

References

1. R. Balmer, B. Bolliger, “Integrating Diamond to Maximize Chip Reliability and Performance,” Chip Scale Review, July/August 2013, pp. 26-30.
2. Internal Element Six Technologies research and report.
3. Element Six internal thermal simulation, C. Bibbe, 2006.
4. J. Anaya, J.W. Pomeroy, M. Kuball, “GaN-on-Diamond High-Electron-Mobility Transistor – Impact of Contact and Transition Layers,” Center for Device Thermography and Reliability (CDTR), H. H. Wills Physics Laboratory, University of Bristol, Bristol, U.K.
5. G.D. Via, J.G. Felbinger, J. Blevins, K. Chabak, G. Jessen, J. Gillespie, R. Fitch, A. Crespo, K. Sutherlin, B. Poling, S. Tetlak, R. Gilbert, T. Cooper, R. Baranyai, J.W. Pomeroy, M. Kuball, J.J. Maurer, and A. Bar-Cohen, “Wafer-Scale GaN HEMT Performance Enhancement by Diamond Substrate Integration,” 10th International Conference on Nitride Semiconductors (ICNS-10), August 25-30, 2013, Washington, DC, USA.
6. M. Tyhach, D. Altman, and S. Bernstein, “Analysis and Characterization of Thermal Transport in GaN HEMTs on SiC and Diamond Substrates,” GOMACTech 2014, March 31-April 3, 2014, Charleston, SC, USA.

THOMAS OBELOER, DANIEL TWITCHEN, JULIAN ELLIS, BRUCE BOLLIGER, Element Six Technologies, Santa Clara, CA

MARTIN KUBALL AND JULIAN ANAYA, Center for Device Thermography and Reliability (CDTR), H. H. Wills Physics Laboratory, University of Bristol, Bristol, U.K. Contact: [email protected]

Applied Materials, Inc. and Tokyo Electron Limited today announced that they have agreed to terminate their Business Combination Agreement (BCA). No termination fees will be payable by either party.

The decision came after the U.S. Department of Justice (DoJ) advised the parties that the coordinated remedy proposal submitted to all regulators would not be sufficient to replace the competition lost from the merger. Based on the DoJ’s position, Applied Materials and Tokyo Electron have determined that there is no realistic prospect for the completion of the merger.

“We viewed the merger as an opportunity to accelerate our strategy and worked hard to make it happen,” said Gary Dickerson, president and chief executive officer of Applied Materials. “While we are disappointed that we are not able to pursue this path, our existing growth strategy is compelling. We have been relentlessly driving this strategy forward and we have made significant progress towards our goals. We are delivering results and gaining share in the semiconductor and display equipment markets, while making meaningful advances in areas that represent the biggest and best growth opportunities for us.

“I would like to thank our employees for their focus on delivering results throughout this process. As we move forward, Applied Materials has tremendous opportunities to leverage our differentiated capabilities and technology in precision materials engineering and drive a significant increase in the value we create for our customers and investors.”

By Paula Doe, SEMI

In this 50th anniversary year of Moore’s Law, the steady scaling of silicon chips’ cost and performance that has so changed our world over the last half century is now poised to change it even further through the Internet of Things, in ways we can’t yet imagine, suggests Intel VP of IoT Doug Davis, who will give the keynote at SEMICON West (July 14-16) this year. Powerful sensors, processors, and communications now make it possible to bring more intelligent analysis of the greater context to many industrial decisions for potentially significant returns, which will drive the first round of serious adoption of the IoT. But there is also huge potential for adding microprocessor intelligence to all sorts of everyday objects and connecting them with outside information, to solve all sorts of real problems, from saving energy to saving babies’ lives. “We see a big impact on the chip industry,” says Davis, noting the need to deal with highly fragmented markets, as well as to reduce power, improve connectivity, and find ways to assure security.

The end of the era of custom embedded designs?

The IoT may mean the end of the era of embedded chips, argues Paul Brody, IBM’s former VP of IoT, who moves to a new job this month, one of the speakers in the SEMICON West TechXPOT program on the impact of the IoT on the semiconductor sector.  Originally, custom embedded solutions offered the potential to design just the desired features, at some higher engineering cost, to reduce the total cost of the device as much as possible. Now, however, high volumes of mobile gear and open Android systems have brought the cost of a loaded system on a chip with a dual core processor, a gigabit of DRAM and GPS down to only $10.  “The SoC will become so cheap that people won’t do custom anymore,” says Brody. “They’ll just put an SoC in every doorknob and window frame.  The custom engineering will increasingly be in the software.”

Security of all these connected devices will require re-thinking as well, since securing all the endpoints, down to every light bulb, is essentially impossible, and supposedly trusted parties have turned out not to be so trustworthy after all. “With these SoCs everywhere, the cost of distributed compute power will become zero,” he argues, noting that this will drive systems towards more distributed processing. One option for security could then be a blockchain system like the one used by Bitcoin, which allows coordination with no central control, even when not all the players are trustworthy. Instead of central coordination, each message is broadcast to all nodes and approved by majority vote, requiring only that the majority of nodes be trustworthy.
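As a toy illustration of the majority-vote idea Brody describes (purely hypothetical, and far simpler than Bitcoin’s actual proof-of-work protocol), a broadcast message could be accepted only when a strict majority of nodes independently validate it:

```python
# Toy majority-vote acceptance of a broadcast message among untrusted nodes.
# This is only a sketch of the idea; real distributed-consensus protocols
# are far more involved.
from typing import Callable, List

def accept_message(message: str, validators: List[Callable[[str], bool]]) -> bool:
    """Accept the message only if a strict majority of nodes approve it."""
    votes = sum(1 for validate in validators if validate(message))
    return votes > len(validators) / 2

# Hypothetical node policies: most nodes check a simple rule, one is faulty.
honest = lambda msg: msg.startswith("cmd:") and len(msg) < 64
faulty = lambda msg: True          # approves everything, i.e. untrustworthy

nodes = [honest, honest, honest, honest, faulty]
print(accept_message("cmd:set_lamp_on", nodes))   # True  (majority approves)
print(accept_message("rm -rf /", nodes))          # False (only the faulty node approves)
```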

While much of the high-volume IoT demand may be for relatively standard, low-cost chips, the high-value opportunity for chip makers may increasingly be in design and engineering services for the expanding universe of customers. “Past waves of growth were driven by computer companies, but as computing goes into everything this time, it will be makers of things like Viking ranges and Herman Miller office furniture who will be driving the applications, and who will need much more help from their suppliers,” he suggests.

Source: Intel, 2015

Adding context to the data from the tool

The semiconductor industry has long been a leader in connecting things in the factory, from early M2M for remote access for service management and improving overall equipment effectiveness, to the increased automation and software management of 300mm manufacturing, points out Jeremy Read, Applied Materials VP of Manufacturing Services, who’ll be speaking in another SEMICON West 2015 program on how the semiconductor sector will use the IoT. But even in today’s highly connected fabs, the connections so far are still limited to linking individual elements for dedicated applications specifically targeting a single end, such as process control, yield improvement, scheduling or dispatching.  These applications, perhaps best described as intermediate between M2M and IoT, have provided huge value, and have seen enormous growth in complexity. “We have seen fabs holding 50 TB of data at the 45nm node, increasing to 140 TB in 20nm manufacturing,” he notes.

Now the full IoT vision is to converge this operational technology (OT) of connected things in the factory with the global enterprise (IT) network, to allow new ways to monitor, search and manage these elements to provide as yet unachievable levels of manufacturing performance. “However, we’ve learned that just throwing powerful computational resources at terabytes of unstructured data is not effective – we need to understand the shared CONTEXT of the tools, the process physics, and the device/design intent to arrive at meaningful and actionable knowledge,” says Read.  He notes that for the next step towards an “Internet-of-semiconductor-manufacturing-things” we will need to develop the means to apply new analytical and optimizing applications to both the data and its full manufacturing context, to achieve truly new kinds of understanding.

With comprehensive data and complete context information it will become possible to transform the service capability in a truly radical fashion – customer engineers can use the power of cloud computation and massive data management to arrive at insights into the precise condition of tools, potentially including the ability to predict failures or changes in processing capability. “This does require customers to allow service providers to come fully equipped into the fab – not locking out all use of such capabilities,” he says. “If we are to realize the full potential of these opportunities, we must first meet these challenges of security and IP protection.”

Besides these programs on the realistic impact of the IoT on the semiconductor manufacturing technology sector, SEMICON West 2015, July 14-16 in San Francisco, will also feature related programs on what’s coming next across MEMS, digital health, embedded nonvolatile memory, flexible/hybrid systems, and connected/autonomous cars.  

IHS Technology’s final market share results for 2014 reveal that worldwide semiconductor revenues grew by 9.2 percent in 2014, coming in just slightly below the growth projection of 9.4 percent based on preliminary market share data IHS published in December 2014. The year ended on a strong note, with the fourth quarter showing 9.7 percent year-over-year growth. IHS semiconductor market tracking and forecasts mark the fourth quarter of 2014 as the peak of the annualized growth cycle for the semiconductor industry.

Global revenue in 2014 totaled $354.5 billion, up from $324.7 billion in 2013, according to the final annual semiconductor market shares published by IHS Technology. The nearly double-digit percentage increase follows solid growth of 6.6 percent in 2013, a decline of 2.6 percent in 2012 and a marginal increase of 1.3 percent in 2011. The performance in 2014 represents the highest rate of annual growth since the 33 percent boom of 2010.

“While 2014 marked a peak year for semiconductor revenue growth, the health of both the semiconductor supply base and end-market demand, position the industry for another year of strong growth in 2015,” said Dale Ford, vice president and chief analyst at IHS Technology. “Overall semiconductor revenue growth will exceed 5 percent in 2015, and many component categories and markets will see improved growth over 2014.  The more moderate 2015 growth is due primarily to more modest increases in the memory and microcomponent categories.  The dominant share of semiconductor markets will continue to see vibrant growth in 2015.”

More information on this topic can be found in the latest release of the Competitive Landscaping Tool from the Semiconductors & Components service at IHS.

Top ten maneuvers

Intel maintained its strong position as the largest semiconductor supplier in the world followed by Samsung Electronics and Qualcomm at a strong number two and three position in the rankings.  On the strength of its acquisition of MStar, MediaTek jumped into the top 10 replacing Renesas Electronics at number 10.  The other big mover among the top 20, Avago Technologies, also was boosted by an acquisition, moving up nine places to number 14 with its acquisition of LSI in 2014.

Strategic acquisitions continue to play a major role in shaping both the overall semiconductor market rankings and establishing strong leaders in key semiconductor segments.  NXP and Infineon will be competing for positions among the top 10 semiconductor suppliers in 2015 with the boost from their mergers/acquisitions of Freescale Semiconductor and International Rectifier, respectively.

Among the top 25 semiconductor suppliers, 21 companies achieved growth in 2014.  Out of the four companies suffering declines, three are headquartered in Japan as the Japanese semiconductor market and suppliers continue to struggle.

Broad-based growth

As noted in the preliminary market share results, 2014 was one of the healthiest years in many years for the semiconductor industry.  Five of the seven major component segments achieved improved growth compared to 2013 growth. All of the major component markets saw positive growth in 2014.  Out of 128 categories and subcategories tracked by IHS, 73 percent achieved growth in 2014.  The combined total of the categories that did not grow in 2014 accounted for only 8.1 percent of the total semiconductor market.

Out of more than 300 companies included in IHS semiconductor research, nearly 64 percent achieved positive revenue growth in 2014.  The total combined revenues of all companies experiencing revenue declines accounted for only roughly 15 percent of total semiconductor revenues in 2014.

Semiconductor strength

Memory again delivered a strong performance, driven by continued strength in DRAM ICs. However, memory market growth declined by a little more than 10 percent compared to the boom year of 2013, which saw over 28 percent growth. Growth in sensors & actuators came in only slightly lower than in 2013.

Microcomponents achieved the strongest turnaround in growth, moving from a 1.6 percent decline in 2013 to 8.9 percent growth in 2014. The segment also delivered the best growth among the major segments after memory ICs. Even digital signal processors (DSPs) achieved positive growth in 2014 following strong, double-digit declines in six of the last seven years. MPUs led the category with 10.7 percent growth, followed by MCUs with 5.4 percent growth.

Every application market delivered strong growth in 2014 with the exception of Consumer Electronics. Industrial Electronics led all segments with 17.8 percent growth. Data Processing accomplished the strongest improvement in growth, growing 13.7 percent, up nearly 10 percentage points from 2013; MPUs and DRAM, of course, played a key role in the strength of semiconductor growth in Data Processing. Automotive Electronics was the third segment with double-digit growth, at 10 percent. Only Wireless Communications saw weaker growth in 2014 than in 2013, with growth falling to 7.8 percent, roughly half its 2013 level.

IC Insights will release its April Update to the 2015 McClean Report later this month. The Update includes the final 2014 company sales rankings for the top 50 semiconductor and top 50 IC companies, and the leading IC foundries. Also included are 2014 IC company sales rankings for various IC product segments (e.g., DRAM, MPU, etc.).

In 2014, there were only two Japanese companies—Toshiba and Renesas—that were among the top 10 semiconductor suppliers (Figure 1). Assuming the NXP/Freescale merger is completed later this year, IC Insights forecasts that Toshiba will be the lone Japanese company left in the top 10 ranking. Anyone who has been involved in the semiconductor industry for a reasonable amount of time realizes this is a major shift and a big departure for a country that once was feared and revered when it came to its semiconductor sales presence in the global market.

Fig 1


Figure 1 traces the top 10 semiconductor companies dating back to 1990, when Japanese semiconductor manufacturers wielded their greatest influence on the global stage and held six of the top 10 positions. That number of Japanese companies among the top 10 semiconductor suppliers has not been matched by any country or region since (although the U.S. had five suppliers in the top 10 in 2014). The number of Japanese companies ranked in the top 10 in semiconductor sales slipped to four in 1995, fell to three in 2000 and 2006, and then to only two in 2014.

Figure 1 also shows that, in total, the top 10 semiconductor sales leaders are making a market share comeback. After reaching a low of 45 percent in 2006, the top 10 semiconductor sales leaders held a 53 percent share of the total semiconductor market in 2014. Although the top 10 share in 2014 was eight points higher than in 2006, it was still six points below the 59 percent share they held in 1990. As fewer suppliers are able to achieve the economies of scale needed to successfully invest and compete in the semiconductor industry, it is expected that the top 10 share of the worldwide semiconductor market will continue to slowly increase over the next few years.

April 2015 marks the 50th anniversary of one of the business world’s most profound drivers, now commonly referred to as Moore’s Law. In April 1965, Gordon Moore, later co-founder of Intel, observed that the number of transistors per square inch on integrated circuits would continue to double every year. This “observation” has set the exponential tempo for five decades of innovation and investment resulting in today’s $336 billion USD integrated circuits industry enabled by the $82 billion USD semiconductor equipment and materials industry (SEMI and SIA 2014 annual totals).

SEMI, the global industry association serving the nano- and micro-electronic manufacturing supply chains, today recognizes the enabling contributions made by the over 1,900 SEMI Member companies in developing semiconductor equipment and materials that produce over 219 billion integrated circuit devices and 766 billion semiconductor units per year (WSTS, 2014).

50 years of Moore’s Law has led to one of the most technically sophisticated, constantly evolving manufacturing industries operating today. Every day, integrated circuit (IC) production now does what was unthinkable 50 years ago. SEMI Member companies now routinely produce materials such as process gases, for example, to levels of 99.994 percent quality for bulk silane (SiH4) in compliance with the SEMI C3.55 Standard. Semiconductor equipment manufacturers develop the hundreds of processing machines necessary for each IC factory (fab) that are at work all day, every day, processing more than 100 silicon wafers per hour with fully automated delivery and control – all with standardized interoperability. SEMI Member companies provide the equipment to inspect wafer process results automatically, and find and identify defects at sizes only fractions of the 14nm circuit line elements in today’s chips, ensuring process integrity throughout the manufacturing process.

“It was SEMI Member companies who enabled Moore’s Law’s incredible exponential growth over the last 50 years,” said Denny McGuirk, president and CEO of SEMI. “Whereas hundreds of transistors on an IC was noteworthy in the 1960s, today over 1.3 billion transistors are on a single IC. SEMI Member companies provide the capital equipment and materials for today’s mega-fabs, with each one processing hundreds or thousands of ICs on each wafer and more than 100,000 wafers processed per month.”

To celebrate SEMI Member companies’ contribution to the 50 years of Moore’s Law, SEMI has produced a series of Infographics that show the progression of the industry.

                                  1971           2015
Price per chip                    $351           $393
Price per 1,000 transistors       $150           $0.0003
Number of transistors per chip    2,300          1,300,000,000
Minimum feature size on chip      10,000nm       14nm

From SEMI infographic “Why Moore Matters”: www.semi.org/node/55026
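The per-1,000-transistor figures follow directly from the chip price and transistor count in the table; a quick arithmetic check using only the numbers above:

```python
# Reproduce the "price per 1,000 transistors" rows from the SEMI infographic.
data = {
    1971: {"price_per_chip": 351.0, "transistors": 2_300},
    2015: {"price_per_chip": 393.0, "transistors": 1_300_000_000},
}

for year, d in data.items():
    per_1000 = d["price_per_chip"] / d["transistors"] * 1000
    print(f"{year}: ${per_1000:,.4f} per 1,000 transistors")
# 1971: ~$152.6 per 1,000 transistors; 2015: ~$0.0003 -- roughly a 500,000x reduction.
```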

BY JOE CESTARI, Total Facility Solutions, Plano, Texas

When the commercial semiconductor manufacturing industry decides to move to the next wafer size of 450mm, it will be time to re-consider equipment and facilities strategies. Arguably, there is reason to implement new strategies for any new fab to be built regardless of the substrate size. In the case of 450mm, if we merely scale up today’s 300mm layouts and operating modes, the costs of construction would more than double. Our models show that up to 25 percent of the cost of new fab construction could be saved through modular design and point-of-use (POU) facilities, and an additional 5-10 percent could be saved by designing for “lean” manufacturing.

In addition to cost-savings, these approaches will likely be needed to meet the requirements for much greater flexibility in fab process capabilities. New materials will be processed to form new devices, and changes in needed process-flows and OEM tools will have to be accommodated by facilities. In fact, tighter physical and data integration between OEM tools and the fab may result in substantially reduced time to first silicon, ongoing operating costs and overall site footprint.

POU utilities with controls close to the process chambers, rather than in the sub-fab, have been modeled as providing a 25-30 percent savings on instrumentation and control systems throughout the fab. Also, with OEM process chamber specifications for vacuum-control and fluid-purity levels expected to increase, POU utilities provide a flexible way to meet future requirements.

Reduction of fluid purity specifications on central supply systems in harmony with increases in localized purification systems for OEM tools can also help control costs, improve flexibility, and enhance operating reliability. There are two main reasons why our future fabs will need much greater flexibility and intelligence in facilities: high-mix production, and 1-12 wafer lots.

High-mix production

Though microprocessors and memory chips will continue to increase in value and manufacturing volumes, major portions of future demand for ICs will be SoCs for mobile applications. The recently announced “ITRS 2.0” – the next roadmap for the semiconductor fab industry after the “2013” edition published early in 2014 – will be based on application solutions and less on simple shrinks of technology. Quoting Gartner Dataquest’s assessment:

“System-on-chip (SoC) is the most important trend to hit the semiconductor industry since the invention of microprocessors. SoC is the key technology driving smaller, faster, cheaper electronic systems, and is highly valued by users of semiconductors as they strive to add value to their products.”

1-12 Wafer Lots

The 24-wafer lot may remain the most cost-effective batch size for low-mix fabs, but for high-mix lines 12-wafer lots are now anticipated even for 300mm wafers. For 450mm wafers, the industry needs to re-consider “the wafer is the batch” as a manufacturing strategy. The 2013 ITRS chapter on Factory mentions in Table 5 that by the year 2019 “Single Wafer Lot Manufacturing System as an option” will likely be needed by some fabs. Perhaps a 1-5 wafer carrier and interface would be a way for an Automated Material Handling System (AMHS) to link discrete OEM tools as an evolution of current 300mm FOUP designs.

However, a true single-wafer fab line would be the realization of a revolution started over twenty years ago with the MMST Program, a $100M+, 5-year R&D effort funded by DARPA, the U.S. Air Force, and Texas Instruments that developed a 0.35μm double-level-metal CMOS fab technology with a three-day cycle time. In the last decade, BlueShift Technologies was started and stopped in an attempt to provide such revolutionary technology: vacuum-robot lines connecting single-wafer chambers, all with a common physical interface.

Lean manufacturing approaches should work well with high-mix product fabs, in addition to providing more efficient consumption of consumables in general. Specifically, when lean manufacturing is combined with small batch sizes – minimally the single wafer – there is tremendous improvement in cycle time.

Machine learning based advanced analytics for anomaly detection offers powerful techniques that can be used to achieve breakthroughs in yield and field defect rates.

BY ANIL GANDHI, PH. D. and JOY GANDHI, Qualicent Analytics, Inc., Santa Clara, CA

In the last few decades, the volume of data collected in semiconductor manufacturing has grown steadily. Today, with the rapid rise in the number of sensors in the fab, the industry is facing a huge torrent of data that presents major challenges for analysis. Data by itself isn’t useful; for it to be useful it must be converted into actionable information to drive improvements in factory performance and product quality. At the same time, product and process complexities have grown exponentially requiring new ways to analyze huge datasets with thousands of variables to discover patterns that are otherwise undetected by conventional means.

In other industries such as retail, finance, telecom and healthcare where big data analytics is becoming routine, there is widespread evidence of huge dollar savings from application of these techniques. These advanced analytics techniques have evolved through computer science to provide more powerful computing that complements conventional statistics. These techniques are revolutionizing the way we solve process and product problems in the semiconductor supply chain and throughout the product lifecycle. In this paper, we provide an overview of the application of these advanced analytics techniques towards solving yield issues and preventing field failures in semiconductors and electronics.

Advanced data analytics boosts prior methods in achieving breakthrough yields, zero defect and optimizing product and process performance. The techniques can be used as early as product development and all the way through high volume manufacturing. It provides a cost effective observational supplement to expensive DOEs. The techniques include machine learning algorithms that can handle hundreds to thousands of variables in big or small datasets. This capability is indispensable at advanced nodes with complex fab process technologies and product functionalities where defects become intractable.

Modeling target parameters

Machine learning based models provide a predictive model of targets such as yield and field defect rates as functions of process, PCM, sort or final test variables as predictors. In the development phase, the challenge is to eliminate major systematic defect mechanisms and optimize new processes or products to ensure high yields during the production ramp. Machine learning algorithms reduce the hundreds to thousands of variables down to the few key variables of importance; this reduction is just sufficient to allow nonlinear models to be built without overfitting. Using the model, a set of rules involving these key variables is derived. These rules provide the best operating conditions to achieve the target yield or defect rate. FIGURE 1 shows an example non-linear predictive model.

FIGURE 1. Predictive model example.
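A minimal sketch of the kind of non-linear predictive model described above, using scikit-learn and synthetic data (the variable indices, thresholds and yield numbers are invented for illustration; the authors’ actual algorithms are proprietary):

```python
# Sketch of a non-linear predictive model of yield from process predictors,
# using synthetic data and scikit-learn (illustrative only).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 20))                      # 20 hypothetical process variables
# Hidden ground truth: yield drops when three variables line up (a 3-way rule).
yield_pct = 95 - 8 * ((X[:, 3] > 0.5) & (X[:, 7] < -0.2) & (X[:, 11] > 0.0)) \
            + rng.normal(scale=1.0, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, yield_pct, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print(f"Held-out R^2: {model.score(X_te, y_te):.2f}")
# Tree inspection or partial-dependence analysis on the fitted model is then
# used to extract interpretable operating rules like the one shown in FIGURE 2.
```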


FIGURE 2 is another example of rules extracted from a model, showing that when all conditions of the rule hold across the three predictors simultaneously, the result is lower yield. Discovering this signal with standard regression techniques failed because of the influence of a large number of manufacturing variables. Each of these variables has a small, negligible influence individually; however, together they create noise that masks the signal. Standard regression techniques available in commercial software are therefore unable to detect the signal in these instances and are not of practical use for process control here. So how do we discover rules such as the ones shown in Fig. 2?

FIGURE 2. Individual parameters M, Q and T do not exert influence while collectively they create conditions that destroy yield. Machine learning methods help discover these conditions.


Rules discovery

Conventionally, a parametric hypothesis is made based on prior knowledge (process domain knowledge) and then tested. For example, to improve an etest metric such as threshold voltage, one could start with a hypothesis that connects this backend parameter with RF power on an etch process in the frontend. However, it is often impossible to form a hypothesis from domain knowledge alone because of the complexity of the processes and the variety of possible interactions, especially across several steps. Alternatively, a generalized model with cross terms is proposed, the significant coefficients are kept and the rest are discarded. This works if the number of variables is small but fails with a large number of variables. With 1,100 variables (a very conservative number for fabs) there are about 221 million possible 3-way interactions and roughly 600,000 2-way cross terms on top of the linear coefficients!

Fitting these coefficients would require a number of samples, or records, that is simply not available in the fab. Recognizing that most of the variables and interactions have no bearing on yield, we must first reduce the feature set size (i.e., the number of predictors) to a manageable limit (< 15) before applying any model; several machine learning techniques based on derivatives of decision trees are available for this feature reduction. Once the feature set is reduced, exact models are developed using a palette of techniques, such as advanced variants of piecewise regression.
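The feature-reduction step could be sketched as follows, again with synthetic data; the importance ranking via a tree ensemble and the cap of fewer than 15 features follow the text, while everything else is assumed:

```python
# Reduce thousands of candidate predictors to a handful (< 15) using a
# tree-ensemble importance ranking before fitting an exact model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_samples, n_vars = 1500, 1100           # 1,100 variables, as in the example above
X = rng.normal(size=(n_samples, n_vars))
# Only six variables actually matter (the rest are dimensional noise).
y = X[:, 10] * X[:, 200] + np.where(X[:, 450] > 1.0, -2.0, 0.0) \
    + 0.5 * X[:, 600] - X[:, 800] * (X[:, 990] < 0) + rng.normal(scale=0.5, size=n_samples)

forest = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0).fit(X, y)
ranking = np.argsort(forest.feature_importances_)[::-1]
top_features = ranking[:15]              # keep a manageable feature set (< 15)
print("Top-ranked variable indices:", top_features[:8])
# Piecewise/regression models are then fit on this reduced set only,
# avoiding the overfitting that 1,100 raw variables would cause.
```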

In essence, what we have described above is discovery of the hypothesis, whereas more traditionally one starts with a hypothesis to be tested. The example in Fig. 2 had 1,100 variables, most of which had no influence; six had a measurable influence (three of them are shown), and all were hard to detect because of dimensional noise.

The above type of technique is part of a group of methods classified as supervised learning. In this type of machine learning, one defines the predictors and target variables and the technique finds the complex relationships or rules governing how the predictors influence the target. In the next example we include the use of unsupervised learning which allows us to discover clusters that reveal patterns and relationships between predictors which can then be connected to the target variables.

FIGURE 3. Solar manufacturing line conveyor, sampled at four points for colorimetry.


FIGURE 3 shows a solar manufacturing line with four panels moving on a conveyor. The end measure of interest that needed improvement was cell efficiency. Measurements are made at the anneal step for each panel at locations 1, 2, 3 and 4, as shown in FIGURE 4. The ratios between measurement sites for a key metric called colorimetry were discovered to be important; this was discovered by employing clustering algorithms, which are part of unsupervised learning. These ratios were found in a subsequent supervised model to influence PV solar cell efficiency as part of a 3-way interaction.

FIGURE 4: The ratios between 1, 2, 3, 4 colorimetry were found to have clusters and the clusters corresponded to date separation.


In this case, without the use of unsupervised machine learning methods, it would have been impossible to identify the ratio between two predictors as an important variable affecting the target because this relationship was not known and therefore no hypothesis could be made for testing it among the large number of metrics and associated statistics that were gathered. Further investigation led to DATE as the determining variable for the clusters.
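The clustering step can be sketched roughly as below. The column names and the synthetic data are hypothetical stand-ins; the real analysis used the colorimetry ratios between the four conveyor positions and then cross-tabulated the resulting clusters against metadata such as DATE:

```python
# Unsupervised discovery: cluster panels on colorimetry ratios between the
# four conveyor positions, then see what distinguishes the clusters.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

# df stands in for one row per panel with colorimetry at positions 1-4
# plus a processing date (synthetic data for illustration).
rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "c1": rng.normal(80, 2, n), "c2": rng.normal(82, 2, n),
    "c3": rng.normal(81, 2, n), "c4": rng.normal(79, 2, n),
    "date": np.where(np.arange(n) < 200, "before_X", "after_X"),
})
df.loc[df["date"] == "after_X", "c1"] *= 1.08   # shift that creates the clusters

ratios = pd.DataFrame({
    "r12": df["c1"] / df["c2"], "r13": df["c1"] / df["c3"], "r14": df["c1"] / df["c4"],
})
df["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(ratios)
print(pd.crosstab(df["cluster"], df["date"]))   # clusters line up with the date split
```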

Ultimately the goal was to create a model for cell efficiency. Feature reduction as described earlier is performed, followed by advanced piecewise regression. The resulting model, validated by 10-fold cross validation (build the model on 80% of the data, test against the remaining 20%, and repeat 10 times with a different random sample each time), is a complex non-linear model whose key element includes a 3-way interaction, as shown in FIGURE 5, where the dark green area represents the condition that drops the median efficiency by 30% from best-case levels. This condition (colorimetry < 81, Date > X and N2 < 23.5) creates the exclusion zone that should be avoided to improve cell efficiency.


FIGURE 5. N2 (x-axis) < 23.5, colorimetry < 81 and Date > X represent the “bad” condition (dark green) where the median cell efficiency drops by 30% from best case levels.

Advanced anomaly detection for zero defect

Throughout the production phase, process control and maverick part elimination are key to preventing failures in the field at early life and the rest of the device operating life. This is particularly crucial for automotive, medical device and aerospace applications where field failures can result in loss of life or injury and associated liability costs.

The challenge in screening potential field failures is that these are typically marginal parts that pass individual parameter specifications. With increased complexity and hundreds to thousands of variables, monitoring a handful of parameters individually is clearly insufficient. We present a novel machine learning-based approach that uses a composite parameter that includes the key variables of importance.

Conventional single parameter maverick part elimination relies on robust statistics for single parameter distributions. Each parameter control chart detects and eliminates the outliers but may eliminate good parts as well. Single parameter control charts are found to have high false alarm rates resulting in significant scrap rates of good material.

In this novel machine learning based method, the composite parameter uses a distance measure from the centroid in multidimensional space. Just as in single parameter SPC charts, data points that lie farthest from the distribution and cross the limits are flagged as maverick and eliminated. In that sense the implementation of this method is very similar to conventional SPC charts, while the algorithm complexity is hidden from the user.

FIGURE 6. Comparison of single parameter control chart for the top parameter in the model and Composite Distance Control Chart. The composite distance method detected almost all field failures without sacrificing good parts whereas the top parameter alone is grossly insufficient.
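A generic sketch of such a composite-distance statistic is shown below, using the Mahalanobis distance from the centroid. This is a textbook formulation with synthetic data, not Qualicent’s proprietary method:

```python
# Composite "distance from centroid" statistic for multivariate maverick
# screening (generic Mahalanobis distance; illustrative only).
import numpy as np

def mahalanobis_distances(X):
    """Distance of each row of X from the centroid of X."""
    center = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    diff = X - center
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

rng = np.random.default_rng(3)
X = rng.normal(size=(5000, 12))          # 12 test parameters, all nominally in spec
X[42] = 2.2 * np.ones(12)                # a marginal part: unremarkable on any one
                                         # parameter, but consistently shifted on all
d = mahalanobis_distances(X)
limit = d.mean() + 3 * d.std()           # SPC-style control limit on the composite
print("Flagged units:", np.where(d > limit)[0][:10])
# Unit 42 is flagged (plus a few random tails), even though each of its
# individual parameters would pass a single-parameter chart.
```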


See FIGURE 6 for a comparison of the single parameter control chart of the top variable of importance versus the composite distance chart. TABLES 1 and 2 show the confusion matrices for these charts. With the single parameter approach, the topmost contributing parameter is able to detect 1 out of 7 field failures; we call this accuracy. However, only one out of 21 declared fails is actually a fail; we call this the purity of the fail class. Potentially more failures could be detected by lowering the limit in the top chart somewhat, but in that case the purity of the fail class, which was already poor, rapidly deteriorates to unacceptable levels.

TABLE 1. Top Parameter


TABLE 2. Composite Parameter


In the composite distance method, on the other hand, 6 out of 7 fails are detected (good accuracy). The cost of this detection is also low (high purity), because 6 of the 10 declared fails are actual field failures. This is far better than 1 out of 21 in the incumbent case, and the gap widens further if the limit in the single top-parameter chart is lowered even a little.
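In standard machine-learning terms, the “accuracy” quoted here is recall on the fail class and the “purity” is precision; both can be reproduced from the counts given above (the remaining confusion-matrix cells are not needed):

```python
# Recall ("accuracy" on the fail class) and precision ("purity") for the two
# screening methods, using the counts quoted in the text.
methods = {
    "single top parameter": {"tp": 1, "fn": 6, "fp": 20},  # 1 of 7 caught; 1 of 21 declared fails real
    "composite distance":   {"tp": 6, "fn": 1, "fp": 4},   # 6 of 7 caught; 6 of 10 declared fails real
}

for name, c in methods.items():
    recall = c["tp"] / (c["tp"] + c["fn"])
    precision = c["tp"] / (c["tp"] + c["fp"])
    print(f"{name:22s} recall={recall:.2f}  precision={precision:.2f}")
```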

We emphasize two key advantages of this novel anomaly detection technique. First, its multivariate nature enables detection of marginal parts that not only pass the specification limits for individual parameters but are also within distribution for each parameter taken individually. The composite distance successfully identifies marginal parts that fail in the field. Second, this method significantly reduces the false alarm risk compared to single parameter techniques. This reduces the cost associated with the “producer’s risk,” or beta risk, of rejecting good units. In short: better detection of maverick material at lower cost.

Summary and conclusion

Machine learning based advanced analytics for anomaly detection offers powerful techniques that can be used to achieve breakthroughs in yield and field defect rates. These techniques are able to crunch large data sets and hundreds to thousands of variables, overcoming a major limitation of conventional techniques. The two key methods explored in this paper are as follows:

Discovery – This set of techniques provides a predictive model that contains the key variables of importance affecting target metrics such as yield or field defect levels. Rules discovery (a supervised learning technique), among many other methods that we employ, discovers rules that provide the best operating or process conditions to achieve the targets; alternatively, it identifies exclusion zones that should be avoided to prevent loss of yield and performance. Discovery techniques can be used during the early production phase, when there is the greatest need to eliminate major yield or defect mechanisms to protect the high volume ramp, and they are equally applicable in high volume production.

Anomaly Detection – This method, based on the unsupervised learning class of techniques, is an effective tool for maverick part elimination. The composite distance process control based on Qualicent’s proprietary distance analysis method provides a cost effective way of preventing field failures. At leading semiconductor and electronics manufacturers, the method has predicted actual automotive field failures that occurred at top carmakers.

The recent acquisition of Freescale Semiconductor by NXP Semiconductors would catapult the merged entity into the world’s eighth-largest chipmaker, positioning the newly minted giant for an even more formidable presence in key industrial sectors, according to IHS, a global source of critical information and insight.

Prior to the merger, NXP ranked 15th in revenue and Freescale 18th. With combined revenue last year of approximately $10 billion, the resulting new company would have surpassed Broadcom. Only Intel, Samsung Electronics, Qualcomm, SK Hynix, Micron Technology, Texas Instruments and Toshiba would have been bigger, as shown in the table below.

Global Top 10 Semiconductor Makers’ Revenue Share

2014 Rank   Company                    Revenue Share
1           Intel                      14.14%
2           Samsung Electronics        10.77%
3           Qualcomm                   5.46%
4           SK Hynix                   4.56%
5           Micron Technology          4.56%
6           Texas Instruments          3.46%
7           Toshiba                    2.90%
8           NXP-Freescale (merged)     2.83%
9           Broadcom                   2.38%
10          STMicroelectronics         2.10%

 

“The merged company’s strength will be especially apparent in automotive-specific analog applications,” said Dale Ford, vice president and chief analyst at IHS. “Automotive products clearly will be the biggest convergence resulting from a merged product portfolio of the Dutch-based NXP and its smaller U.S. rival.”

The amalgamated NXP-Freescale would place the company in second place in the area of microcontroller units (MCUs), which are integrated circuits for embedded and automatically controlled applications, including automotive engine-control systems.  The merged company could also affect the digital signal processing (DSP) market, where Texas Instruments reigns supreme. DSPs are an important component in the audio and video handling of digital signals used in myriad applications, including mobile-phone speech transmission, computer graphics and MP3 compression.

“While both NXP and Freescale boast diverse portfolios with complementary products, the high-performance lines of the two chipmakers have very different target solutions,” said Tom Hackenberg, senior analyst for MCUs and microprocessors at IHS.

Freescale has been a key strategic provider of high-reliability automotive, telecom infrastructure and industrial solutions, including both application-specific and general-purpose products that go after high-performance applications. NXP’s broad portfolio, by comparison, has strategically targeted precision analog and low-power portable-device applications, most of which are directed at portable wireless, automotive infotainment, consumer components and a complementary base of industrial components, including secure MCUs for smart cards. Even in the auto industry, where the two companies both focus on infotainment, their technologies harmonize: NXP dominates the radio market, while Freescale fills a large demand for low- to midrange center-stack processors and instrument cluster controllers.

“The most significant processor competition will likely occur in low-power connectivity solutions, where both chipmakers offer competitive connectivity MCUs,” said Hackenberg. “In particular, the newly merged company will be well-positioned to make groundbreaking advances in the human-machine interface market.”

Freescale recently began developing its portfolio of vision-related intellectual property with Canadian maker CogniVue, used in advanced driver assistance systems (ADAS). For its part, NXP has solid voice-processing expertise. Both companies overall have strong sensor fusion intellectual property, with each maker tending toward different applications. “The resulting combination could offer strategic symmetry in combined vision-, voice- and motion-controlled systems,” Hackenberg added.

Another important aspect of the merger is that Freescale is a near-exclusive source for power architecture processors and processor intellectual property. Although its market share overall is small compared to x86 and ARM, Freescale plays a significant role in the military aerospace industry, where many high-reliability equipment controls rely on power architecture. “While the acquisition of Freescale by a foreign owner is unlikely to be a deal breaker, the development could have some bearing on the approval process in the military, as it will now involve a non-U.S. company possessing ownership of its primary source of military aerospace specific Power Architecture,” Hackenberg noted.

By Douglas G. Sutherland and David W. Price

Author’s Note: This is the fifth in a series of 10 installments exploring the fundamental truths about process control—defect inspection and metrology—for the semiconductor industry. Each article in this series introduces one of the 10 fundamental truths and highlights their implications.

In the last installment we discussed the idea that uncertainty in measurement is part of the process. Anything that degrades the quality of the measurement also degrades the quality of the process because it introduces more variability into the Statistical Process Control (SPC) charts which are windows into the health of the process. In this paper we will expand upon those ideas.

The fifth fundamental truth of process control for the semiconductor IC industry is:

Variability is the Enemy of a Well Controlled Process.

In a wafer fab there are many different types of variability — all of them are bad.

  • Variability in the lot arrival rate, in processing times and in the downtime of processing tools (to name just a few sources) contributes to increased cycle time
  • Variability in the physical features (CD, film thickness, side-wall angle, etc.) contributes to increased leakage current, slower part speed and yield loss
  • Variability in the defect rate leads to variability in the final yield, in the infant mortality rate, and in long-term reliability
  • Most importantly, variability degrades our ability to monitor small changes in the process – the signal must be greater than the noise in order to be detectable

There is nearly always some way to adjust the average of a given measurement, but the range of values is much harder to control and often much more important. For example, if a man has his feet in an oven and his head in a freezer, his average body temperature may well be 98°F but that fact won’t make him any less dead. Variability kills, and any effort to reduce it is usually time and money well spent.

Variability in Defect Inspection

Figure 1 below shows two simulated SPC charts that monitor the defect count at a given process step. Each chart samples every fifth lot (20 percent lot sampling). Both charts have an excursion at lot number 300, where a defect of interest (DOI) that makes up 10 percent of the total suddenly increases three-fold. In the left chart the excursion would be caught within 8.5 lots on average, but in the right chart the same excursion would not be caught, on average, until 38.6 lots had passed. The only difference is that the chart on the right has twice as much variability.

In general, for an excursion to be caught in a timely fashion it must be large enough to increase the average total defect count by an amount equivalent to three standard deviations of the baseline. If the baseline defect count is very noisy (high variability), then only large excursions will be detectable. People often think that this is the purpose of excursion monitoring: to find the big changes in defectivity. It is not.


Figure 1. Two simulated SPC charts monitoring the defect count at the same process step; the chart on the left has half the variability of the chart on the right. The excursion at lot number 300 is detected on the left chart within 8.5 lots (on average), but the same excursion goes undetected for 38.6 lots on the right chart. Doubling the variability increases the number of exposed lots by more than 4.5x.
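To make the link between chart noise and detection delay concrete, the short Monte Carlo sketch below applies a simple 3-sigma upper control limit with 20 percent lot sampling and estimates how many lots pass, on average, before an excursion of the kind shown in Figure 1 is flagged. It is a minimal illustration, not the authors’ simulation; the baseline mean, the two noise levels and the defect-of-interest assumptions are hypothetical.

```python
# Minimal Monte Carlo sketch (not the authors' simulation): how long a 3-sigma SPC
# chart with 20 percent lot sampling takes to flag an excursion in which a DOI that
# is 10 percent of the total defect count suddenly triples. All numbers are
# illustrative assumptions.
import random

def lots_to_detection(baseline_mean, baseline_sigma, doi_fraction=0.10,
                      doi_increase=3.0, sampling=5, trials=2000):
    """Average number of lots processed after the excursion starts before it is caught."""
    ucl = baseline_mean + 3 * baseline_sigma                  # upper control limit
    # Post-excursion mean: the non-DOI portion is unchanged, the DOI portion triples.
    excursion_mean = baseline_mean * ((1 - doi_fraction) + doi_fraction * doi_increase)
    delays = []
    for _ in range(trials):
        lot = 0
        while True:
            lot += 1
            if lot % sampling == 0:                           # only every fifth lot is inspected
                count = random.gauss(excursion_mean, baseline_sigma)
                if count > ucl:                               # point lands above the 3-sigma limit
                    delays.append(lot)
                    break
    return sum(delays) / len(delays)

# Identical excursion, two levels of baseline variability.
print("sigma = 10:", round(lots_to_detection(100, 10), 1), "lots exposed on average")
print("sigma = 20:", round(lots_to_detection(100, 20), 1), "lots exposed on average")
```

With these assumed numbers, doubling the baseline noise multiplies the average number of exposed lots several times over, which is the same qualitative behavior shown in Figure 1.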

In our experience it is nearly always the smaller excursions that cause the most damage, simply because they go undetected for prolonged periods of time. The big excursions get a lot of attention and generate a lot of activity, but the dollar value of their impact is usually quite small in comparison. It is not uncommon to see low-level excursions cause upwards of $30,000,000 in yield loss. Large excursions, by contrast, are usually identified very quickly and typically result in only a few million dollars of loss.

Other sources of variability in inspection data are low capture rate (CR) and poor CR stability. Defect inspection tools that have low CR will inherently have low CR stability. This means that even if the exact same defects could be moved to a different wafer you would not get the same result because of the different background signal from one wafer to the next. This adds significant variability into the SPC chart and can severely impair the ability to detect changes in the defect level.

It’s similar to looking at the stars on two different nights. Sometimes you see them all; sometimes you don’t. The stars are still there—it’s just that the conditions have changed. Something analogous happens with wafers. The exact same defects may be present but the conditions (film stack, CD, overlay, etc.) have changed. An inspection tool with a tunable wavelength allows you to filter out the background noise in the same way that a radio telescope allows you to see through the clouds. Inspection tools with flexible optical parameter settings (wavelength, aperture, polarization, etc.) produce robust inspections that effectively handle changes in background noise and take the variability out of the defect inspection process.

Variability in Metrology

Figure 2 shows two different distributions of critical dimension (CD). The chart on the left shows a distribution that spans the full range from the lower control limit (LCL) all the way to the upper control limit (UCL). Any change in the position of the average will result in some part of the tail extending beyond one of the control limits.


Figure 2. The distribution of CD values. The left chart shows a highly variable process and the right chart shows a process that has low variability.

The right-hand chart has much less variability. Not only can the average value drift a little in either direction without violating a control limit, but there is also enough room to deliberately shift the position of the center point. Depending on the step, this may allow one to tune the speed of the part or make trade-offs between part speed and leakage current.
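The headroom argument can be made quantitative with a short calculation. The sketch below uses hypothetical CD numbers (the control limits, target and sigmas are assumptions, not values taken from Figure 2) to compute the fraction of a normally distributed CD population that falls outside the control limits for a wide and a narrow distribution, and to show that only the narrow one can be re-centered deliberately without pushing a tail past a limit.

```python
# Illustrative sketch (assumed numbers, not from the article): fraction of a normal
# CD distribution falling outside the control limits for a wide vs. a narrow process,
# and the headroom left for deliberately re-centering the narrow one.
import math

def fraction_outside(mean, sigma, lcl, ucl):
    """Two-sided tail fraction of a normal(mean, sigma) population outside [lcl, ucl]."""
    cdf = lambda x: 0.5 * (1 + math.erf((x - mean) / (sigma * math.sqrt(2))))
    return cdf(lcl) + (1 - cdf(ucl))

LCL, UCL, target = 44.0, 56.0, 50.0                 # nm; hypothetical 6 nm half-window

wide = fraction_outside(target, 2.0, LCL, UCL)      # distribution spans nearly the full window
narrow = fraction_outside(target, 0.8, LCL, UCL)

# The narrow process can be shifted off-center (e.g. to trade part speed against
# leakage current) and still keep its tails well inside the window.
shifted = fraction_outside(target + 2.0, 0.8, LCL, UCL)

print(f"wide process:            {wide:.2%} outside limits")
print(f"narrow process:          {narrow:.2%} outside limits")
print(f"narrow, shifted +2 nm:   {shifted:.2%} outside limits")
```

With these assumed values, the wide distribution already places roughly 0.3 percent of parts at or beyond a 3-sigma limit, while the narrow one leaves room for a deliberate 2 nm shift with essentially nothing out of limits.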

Up to 10 percent of the breadth of these distributions comes from the CD tool used to measure the value in the first place. Contributions to this measurement variability, collectively the total measurement uncertainty (TMU), come from static precision, dynamic precision, long-term stability and matching. Clearly, metrology tools with better TMU allow more latitude in the fine-tuning of process control. This becomes especially important when using feed-forward and/or feedback loops, which can compound noise in the measurement process.
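A common way to reason about this contribution (an assumed error model, not a formula quoted in the article) is to treat the TMU components as adding in quadrature with the true process spread, as in the sketch below; the component values are hypothetical.

```python
# Sketch of the standard quadrature model (an assumed model, not the article's data):
# the observed CD spread combines the true process spread with the metrology TMU.
import math

def observed_sigma(process_sigma, tmu_components):
    """Root-sum-square of the process sigma and the individual TMU components."""
    tmu = math.sqrt(sum(c ** 2 for c in tmu_components))
    return math.sqrt(process_sigma ** 2 + tmu ** 2), tmu

# Hypothetical component values in nm: static precision, dynamic precision,
# long-term stability, tool-to-tool matching.
good_tool = [0.10, 0.12, 0.08, 0.10]
poor_tool = [0.25, 0.30, 0.20, 0.25]

for name, components in (("better TMU", good_tool), ("poorer TMU", poor_tool)):
    sigma_obs, tmu = observed_sigma(1.0, components)        # true process sigma of 1.0 nm
    added = (sigma_obs - 1.0) * 100
    print(f"{name}: TMU = {tmu:.2f} nm, observed sigma = {sigma_obs:.2f} nm "
          f"(~{added:.0f}% added to the apparent process spread)")
```

Under these assumptions, the poorer tool inflates the apparent distribution by roughly 10 percent, consistent with the magnitude quoted above, and that extra width is pure measurement noise rather than real process variation.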

Obviously the best way to reduce variability is through the process itself. However, poorly implemented process control tools (inspection and metrology) and process control strategies can add to that variability in meaningful ways. Metrology and inspection are the windows into your process: they allow you to see which parts of the process are stable and, more importantly, which parts are changing. The expense of implementing a superior process control strategy is nearly always recouped through reduced variability and measurements that are more sensitive to the small changes that cause the most financial damage.

About the authors:

Dr. David W. Price is a Senior Director at KLA-Tencor Corp. Dr. Douglas Sutherland is a Principal Scientist at KLA-Tencor Corp. Over the last 10 years, Dr. Price and Dr. Sutherland have worked directly with more than 50 semiconductor IC manufacturers to help them optimize their overall inspection strategy to achieve the lowest total cost. This series of articles attempts to summarize some of the universal lessons they have observed through these engagements.

Read more Process Watch:

The most expensive defect

Process Watch: Fab managers don’t like surprises

Process Watch: The 10 fundamental truths of process control for the semiconductor IC industry

Process Watch: Exploring the dark side

Also in this series: “The Dangerous Disappearing Defect,” “Skewing the Defect Pareto,” “Bigger and Better Wafers,” “Taming the Overlay Beast,” “A Clean, Well-Lighted Reticle,” “Breaking Parametric Correlation,” “Cycle Time’s Paradoxical Relationship to Yield,” and “The Gleam of Well-Polished Sapphire.”