Category Archives: Blogs

By Douglas G. Sutherland and David W. Price

Author’s Note: The Process Watch series explores key concepts about process control—defect inspection and metrology—for the semiconductor industry. Following the previous installments, which examined the 10 fundamental truths of process control, this new series of articles highlights additional trends in process control, including successful implementation strategies and the benefits for IC manufacturing.

While working at the Guinness® brewing company in Dublin, Ireland in the early 1900s, William Sealy Gosset developed a statistical algorithm called the T-test1. Gosset used this algorithm to determine the best-yielding varieties of barley to minimize costs for his employer, but to help protect Guinness’ intellectual property he published his work under the pen name “Student.” The version of the T-test that we use today is a refinement made by Sir Ronald Fisher, a colleague of Gosset’s at Oxford University, but it is still commonly referred to as Student’s T-test. This paper does not address the mathematical nature of the T-test itself but rather looks at the amount of data required to consistently achieve the ninety-five percent confidence level in the T-test result.

A T-test is a statistical algorithm used to determine if two samples are part of the same parent population. It does not resolve the question unequivocally but rather calculates the probability that the two samples are part of the same parent population. As an example, if we developed a new methodology for cleaning an etch chamber, we would want to show that it resulted in fewer fall-on particles. Using a wafer inspection system, we could measure the particle count on wafers in the chamber following the old cleaning process and then measure the particle count again following the new cleaning process. We could then use a T-test to tell if the difference was statistically significant or just the result of random fluctuations. The T-test answers the question: what is the probability that two samples are part of the same population?

However, as shown in Figure 1, there are two ways that a T-Test can give a false result: a false positive or a false negative. To confirm that the experimental data is actually different from the baseline, the T-test usually has to score less than 5% (i.e. less than 5% probability of a false positive). However, if the T-test scores greater than 5% (a negative result), it doesn’t tell you anything about the probability of that result being false. The probability of false negatives is governed by the number of measurements. So there are always two criteria: (1) Did my experiment pass or fail the T-test? (2) Did I take enough measurements to be confident in the result? It is that last question that we try to address in this paper.
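To make the first criterion concrete, the short sketch below runs a two-sample T-test on hypothetical particle counts taken before and after a chamber clean; the counts and the 5% threshold are invented for illustration only.

```python
# Minimal sketch: two-sample T-test on hypothetical particle-count data.
# The particle counts below are invented for illustration only.
from scipy import stats

baseline = [22, 25, 19, 27, 24, 21, 26, 23, 25, 20]   # counts before the new clean
results  = [18, 20, 17, 21, 19, 16, 22, 18, 20, 17]   # counts after the new clean

t_stat, p_value = stats.ttest_ind(baseline, results)

# Criterion 1: did the experiment pass the T-test at the 5% level?
if p_value < 0.05:
    print(f"p = {p_value:.3f}: the reduction is statistically significant")
else:
    print(f"p = {p_value:.3f}: cannot rule out random fluctuation")
# Criterion 2 (did we take enough measurements?) is addressed by Equation 1 below.
```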

Figure 1. A “Truth Table” highlights the two ways that a T-Test can give the wrong result.

Changes to the semiconductor manufacturing process are expensive propositions. Implementing a change that doesn’t do anything (false positive) is not only a waste of time but potentially harmful. Not implementing a change that could have been beneficial (false negative) could cost tens of millions of dollars in lost opportunity. It is important to have the appropriate degree of confidence in your results, and to do so requires that you use a sample size that is appropriate for the size of the change you are trying to effect. In the example of the etch cleaning procedure, this means that inspection data from a sufficient number of wafers needs to be collected in order to determine whether or not the new clean procedure truly reduces particle count.

In general, the bigger the difference between two things, the easier it is to tell them apart. It is easier to tell red from blue than it is to distinguish between two different shades of red or between two different shades of blue. Similarly, the less variability there is in a sample, the easier it is to see a change2. In statistics the variability (sometimes referred to as noise) is usually measured in units of standard deviation (σ). It is often convenient to also express the difference in the means of two samples in units of σ (e.g., the mean of the experimental results was 1σ below the mean of the baseline). The advantage of this is that it normalizes the results to a common unit of measure (σ). Simply stating that two means are separated by some absolute value is not very informative (e.g., the average of A is greater than the average of B by 42). However, if we can express that absolute number in units of standard deviations, then it immediately puts the problem in context and instantly provides an understanding of how far apart these two values are in relative terms (e.g., the average of A is greater than the average of B by 1 standard deviation).

Figure 2 shows two examples of data sets, before and after a change. These can be thought of in terms of the etch chamber cleaning experiment we discussed earlier. The baseline data is the particle count per wafer before the new clean process and the results data is the particle count per wafer after the new clean procedure. Figure 2A shows the results of a small change in the mean of a data set with high standard deviation and figure 2B shows the results of the same sized change in the mean but with less noisy data (lower standard deviation). You will require more data (e.g., more wafers inspected) to confirm the change in figure 2A than in figure 2B simply because the signal-to-noise ratio is lower in 2A even though the absolute change is the same in both cases.

Figure 2. Both charts show the same absolute change, before and after, but 2B (right) has much lower standard deviation. When the change is small relative to the standard deviation as in 2A (left) it will require more data to confirm it.

The question is: how much data do we need to confidently tell the difference? Visually, we can see this when we plot the data in terms of the Standard Error (SE). The SE can be thought of as the error in calculating the average (e.g., the average was X +/- SE). The SE is proportional to σ/√n where n is the sample size. Figure 3 shows the SE for two different samples as a function of the number of measurements, n.
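As a quick illustration, the sketch below computes the SE for a hypothetical σ at several sample sizes, showing how the error in the average shrinks as 1/√n.

```python
# Minimal sketch: the standard error of the mean shrinks as 1/sqrt(n).
# sigma and the sample sizes are arbitrary values chosen for illustration.
import math

sigma = 4.0                      # standard deviation of the particle counts
for n in (5, 10, 25, 50, 100):
    se = sigma / math.sqrt(n)    # SE of the average of n measurements
    print(f"n = {n:3d}: average +/- {se:.2f}")
```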

Figure 3. The Standard Error (SE) in the average of two samples with different means. In this case the standard deviation is the same in both data sets but that need not be the case. With greater than x measurements the error bars no longer overlap and one can state with 95% confidence that the two populations are distinct.

For a given difference in the means and a given standard deviation we can calculate the number of measurements, x, required to eliminate the overlap in the Standard Errors of these two measurements (at a given confidence level).

The actual equation to determine the correct sample size in the T-test is given by,

n = 2 (Z1-α/2 + Zβ)² / Delta²     (Equation 1)

where n is the required sample size, “Delta” is the difference between the two means measured in units of standard deviation (σ) and Zx is the value of the standard normal distribution whose cumulative probability is x (its x-quantile). For α=0.05 (5% chance of a false positive) and β=0.95 (5% chance of a false negative), Z1-α/2 and Zβ are equal to 1.960 and 1.645 respectively (Z values for other values of α and β are available in most statistics textbooks, Microsoft® Excel® or on the web). As seen in Figure 3 and shown mathematically in Eq 1, as the difference between the two populations (Delta) becomes smaller, the number of measurements required to tell them apart grows rapidly, in proportion to 1/Delta². Figure 4 shows the required sample size as a function of the Delta between the means expressed in units of σ. As expected, for large changes, greater than 3σ, one can confirm the T-test 95% of the time with very little data. As Delta gets smaller, more measurements are required to consistently confirm the change. A change of only one standard deviation requires 26 measurements before and after, but a change of 0.5σ requires over 100 measurements.
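The sketch below implements Equation 1 using standard normal quantiles from scipy; it reproduces the sample sizes quoted above (roughly 26 measurements per group for a 1σ change and just over 100 for a 0.5σ change).

```python
# Minimal sketch of Equation 1: per-group sample size for the T-test.
from scipy.stats import norm

def required_sample_size(delta, alpha=0.05, power=0.95):
    """Measurements needed before AND after, for a shift of `delta` sigma."""
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.960 for alpha = 0.05
    z_beta = norm.ppf(power)            # 1.645 for 95% power (5% false negatives)
    return 2 * (z_alpha + z_beta) ** 2 / delta ** 2

for delta in (3.0, 1.0, 0.5):
    print(f"delta = {delta} sigma -> n = {required_sample_size(delta):.0f} per group")
# delta = 1.0 gives ~26; delta = 0.5 gives ~104, matching Figure 4.
```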

Figure 4. Sample size required to confirm a given change in the mean of two populations with 5% false positives and 5% false negatives

The relationship between the size of the change and the minimum number of measurements required to detect it has ramifications for the type of metrology or inspection tool that can be employed to confirm a given change. Figure 5 uses the results from figure 4 to show the time it would take to confirm a given change with different tool types. In this example the sample size is measured in number of wafers. For fast tools (high throughput, such as laser scanning wafer inspection systems) it is feasible to confirm relatively small improvements (<0.5σ) in the process because they can make the 200 required measurements (100 before and 100 after) in a relatively short time. Slower tools such as e-beam inspection systems are limited to detecting only gross changes in the process, where the improvement is greater than 2σ. Even here the measurement time alone means that it can be weeks before one can confirm a positive result. For the etch chamber cleaning example, it would be necessary to quickly determine the results of the change in clean procedure so that the etch tool could be put back into production. Thus, the best inspection system to determine the change in particle counts would be a high throughput system that can detect the particles of interest with low wafer-to-wafer variability.
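The trade-off in Figure 5 can be approximated with a few lines of code; the throughput values below are hypothetical placeholders, not the specifications of any particular inspection tool.

```python
# Minimal sketch: time to collect 2n wafers (n before + n after) at a given throughput.
# Throughputs (wafers per hour) are hypothetical placeholders, not tool specifications.
from scipy.stats import norm

def required_sample_size(delta, alpha=0.05, power=0.95):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * z ** 2 / delta ** 2

hypothetical_throughput = {"laser scattering": 60.0, "broadband plasma": 10.0, "e-beam": 0.2}

delta = 0.5                                   # size of the process change, in sigma
wafers = 2 * required_sample_size(delta)      # measurements before and after
for tool, wph in hypothetical_throughput.items():
    print(f"{tool:17s}: {wafers / wph:8.1f} hours to confirm a {delta} sigma change")
```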

Figure 5. The measurement time required to determine a given change for process control tools with four different throughputs (e-Beam, Broadband Plasma, Laser Scattering and Metrology)

Experiments are expensive to run. They can be a waste of time and resources if they result in a false positive and can result in millions of dollars of unrealized opportunity if they result in a false negative. To have the appropriate degree of confidence in your results you must use the correct sample size (and thus the appropriate tools) corresponding to the size of the change you are trying to effect.

References:

  1. https://en.wikipedia.org/wiki/William_Sealy_Gosset
  2. Process Watch: Know Your Enemy, Solid State Technology, March 2015

About the Authors:

Dr. David W. Price is a Senior Director at KLA-Tencor Corp. Dr. Douglas Sutherland is a Principal Scientist at KLA-Tencor Corp. Over the last 10 years, Dr. Price and Dr. Sutherland have worked directly with more than 50 semiconductor IC manufacturers to help them optimize their overall inspection strategy to achieve the lowest total cost. This series of articles attempts to summarize some of the universal lessons they have observed through these engagements.

By Chet Lenox, David W. Price and Douglas G. Sutherland

Author’s Note: The Process Watch series explores key concepts about process control—defect inspection and metrology—for the semiconductor industry. Following the previous installments, which examined the 10 fundamental truths of process control, this new series of articles highlights additional trends in process control, including successful implementation strategies and the benefits for IC manufacturing. For this article, we are pleased to include insights from our guest author and colleague at KLA-Tencor, Chet Lenox.

In order to maximize the profitability of an IC manufacturer’s new process node or product introduction, an early and fast yield ramp is required. Key to achieving this rapid yield ramp is the ability to provide quality and actionable data to the engineers making decisions on process quality and needed improvements.

The data used to make these decisions comes in two basic forms:

  • Inline inspection and metrology results
  • End-of-line (EOL) parametric testing, product yield results and failure-analysis

Inline inspection and metrology serve as the primary source of data for process engineers, enabling quick identification of excursions and implementation of corrective actions. End-of-line results serve as a metric of any process flow’s ability to produce quality product, generating transistor parametrics, yield sub-binning and physical failure analysis (PFA) data that provide insight into process quality and root-cause mechanisms.

In general, a fab is better off financially by finding and fixing problems inline versus end-of-line1 due to the long delay between wafer processing and collection of EOL data. However, EOL results are a critical component in understanding how specific inline defects correlate to product performance and yield, particularly during early process development cycles. Therefore, the ideal yield improvement methodology relies on inline inspection and metrology for excursion monitoring and process change qualification, while EOL results are used only for the validation of yield improvement changes.

In order for this scenario to be achieved, inline data must be high quality with appropriate sampling, and a clear correlation must be established between inline results and EOL yield. One key tool that is often utilized to achieve this connection is hitback analysis. Hitback analysis is the mapping of EOL electrical failure and PFA locations to inline defect locations identified by inspection tools.

Hitback analysis comes in two basic forms. In the traditional method, EOL yield failures guide PFA, often in the form of a cross-section transmission electron microscope (TEM) confirmation of a physical defect. This physical location is then overlaid against inline defect locations for correlation to inline learning. This analysis often offers clear causality for yield failures, but is slow (dozens/week) and can be blind to defect modes that are difficult to locate or image in TEM.

The second method, which is growing in popularity, is to overlay the EOL electrical failure location directly to inline defect data (figure 1). This is largely enabled by modern logic design methods and analysis tools that allow electrical failures to be localized into “chain” locations where the failure is likely to occur. Furthermore, new technologies allow inline inspection to be guided to potential chain location failures based purely on design layout.

For example, KLA-Tencor’s broadband plasma optical patterned wafer inspection systems incorporate patented technologies (NanoPoint™, pin•point™) that leverage design data to define very tiny inspection areas focused solely on critical patterns.2,3,4 Using these design-based technologies to inspect patterns related to potential chain failures produces inspection results consisting of defects that are strongly correlated to end-of-line yield. This more direct technique allows for faster turn-around on analysis, enables higher sampling (hundreds of defects/wafer) and can provide successful causality on defect modes that are difficult to find physically at EOL.

Figure 1. Hitback analysis technique where likely die fail chain locations from EOL are overlaid with inline inspection results.

To achieve successful direct hitback analysis from electrical fail chains to inline defect locations, a number of methodologies are helpful:

  • Wafers that will be used for hitback analysis should be inspected at all key process steps. This avoids “holes” in potential causality to the EOL failure
  • Geometry-based overlay algorithms should be used that combine the point-based inline defect location with area-based reporting of EOL chains
  • The overlay distance allowed to label a chain-to-defect distance a “hit” must be large enough to allow for inspection tool defect location accuracy (DLA) but small enough that the statistical probability of false-positives is low; see Figure 2
  • All defects found by the inspector should be used for analysis, not just defects that are classified by subsequent review steps
  • Electrical fail chain locations should utilize layer information as well as x/y mapping

Figure 2. The threshold used to overlay EOL electrical chains to inline defects must be optimized to avoid failures or false positives.
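A minimal sketch of the geometry-based overlay described above: each fail chain is treated as a rectangle, expanded by an assumed DLA tolerance, and checked against the inline defect points. The coordinates, the DLA value and the data structures are hypothetical.

```python
# Minimal sketch of a geometry-based hitback overlay (all values hypothetical).
# Each EOL fail chain is an axis-aligned rectangle; inline defects are (x, y) points.

DLA = 2.0  # assumed inspector defect location accuracy, in microns

fail_chains = [                     # (x_min, y_min, x_max, y_max) per chain, microns
    (10.0, 10.0, 12.0, 30.0),
    (50.0, 40.0, 52.0, 60.0),
]
inline_defects = [(11.2, 18.5), (80.0, 80.0), (51.0, 62.1)]   # inline defect locations

def is_hit(chain, defect, tolerance):
    """True if the defect lies within the chain area expanded by the DLA tolerance."""
    x_min, y_min, x_max, y_max = chain
    x, y = defect
    return (x_min - tolerance <= x <= x_max + tolerance and
            y_min - tolerance <= y <= y_max + tolerance)

hits = sum(any(is_hit(chain, d, DLA) for d in inline_defects) for chain in fail_chains)
print(f"hitback capture rate: {100.0 * hits / len(fail_chains):.0f}% "
      f"({hits} of {len(fail_chains)} fail chains map to an inline defect)")
```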

When performed properly, the hitback capture rate metric (in percentage) will quantify the number of fails which “hitback” to inline defects. This metric can be used broadly as an indicator of inline inspection capability, with higher numbers indicating that inline inspection can be more confidently used in yield improvement efforts. Therefore, hitback analysis should be performed as early as possible in the development cycle and new product introduction timescale. This allows time for inline defect inspection capture rate improvement through these traditional methods:

  • Inspection tool and recipe improvement, including the use of guided inspection based on product layout
  • Lot-, wafer- and die-level sampling adjustments
  • Process step inspection location optimization

When performed regularly, hitback analysis greatly assists in improving inline inspection confidence and improves yield learning speed. Hitback capture rates of more than 70 percent are not uncommon for effective inline monitoring schemes. It is worth mentioning that the slower EOL PFA Pareto generation and hitback analysis is still required, even when direct EOL-to-inline overlay is performed, in order to validate the chain fails and the hitback capture rate.

Yield ramp rate is often the primary factor in the profitability of a fab’s new process and new product introduction. This ramp rate is strongly influenced by the effectiveness of inline wafer inspection, allowing faster information turns and quicker decision making by process engineers. Hitback analysis is a key method for gauging the effectiveness of inline inspection and for driving inspection improvements, particularly when correlating EOL electrical chain failures to inline defect results.

References:

About the Authors:

Dr. Chet Lenox, Dr. David W. Price and Dr. Douglas Sutherland are Yield Consultant, Senior Director, and Principal Scientist, respectively, at KLA-Tencor Corp. Dr. Lenox, Dr. Price and Dr. Sutherland have worked directly with many semiconductor IC manufacturers to help them optimize their overall inspection strategy to achieve the lowest total cost. This series of articles attempts to summarize some of the universal lessons they have observed through these engagements.

By David W. Price and Douglas G. Sutherland

Author’s Note: The Process Watch series explores key concepts about process control—defect inspection and metrology—for the semiconductor industry. Following the previous installments, which examined the 10 fundamental truths of process control, this new series of articles highlights additional trends in process control, including successful implementation strategies and the benefits for IC manufacturing. 

Introduction

In a previous Process Watch article [1], we showed that big excursions are usually easy to detect but finding small excursions requires a combination of high capture rate and low noise. We also made the point that, in our experience, it’s usually the smaller excursions which end up costing the fab more in lost product. Catastrophic excursions have a large initial impact but are almost always detected quickly. By contrast, smaller “micro-excursions” sometimes last for weeks, exposing hundreds or thousands of lots to suppressed yield.

Figure 1 shows an example of a micro-excursion. For reference, the top chart depicts what is actually happening in the fab with an excursion occurring at lot number 300. The middle chart shows the same excursion through the eyes of an effective inspection strategy; while there is some noise due to sampling and imperfect capture rate, it is generally possible to identify the excursion within a few lots. The bottom chart shows how this excursion would look if the fab employed a compromised inspection strategy—low capture rate, high capture rate variability, or a large number of defects that are not of interest; in this case, dozens of lots are exposed before the fab engineer can identify the excursion with enough confidence to take corrective action.

Figure 1. Illustration of a micro-excursion. Top: what is actually happening in the fab. Middle: the excursion through the lens of an effective control strategy (average 2.5 exposed lots). Bottom: the excursion from the perspective of a compromised inspection strategy (~40 exposed lots).

Unfortunately, the scenario depicted in the bottom of Figure 1 is all too common. Seemingly innocuous cost-saving tactics, such as reduced sampling or using a less sensitive inspector, can quickly render a control strategy ineffective [2]. Moreover, the fab may gain a false sense of security that the layer is being effectively monitored by virtue of its ability to find the larger excursions.

Micro-Excursions 

Table 1 illustrates the difference between catastrophic and micro-excursions. As the name implies, micro-excursions are subtle shifts away from the baseline. Of course, excursions may also take the form of anything in between these two.

Table 1: Catastrophic vs. Micro-Excursions

Such baseline shifts happen to most, if not all, process tools—after all, that’s why fabs employ rigorous preventative maintenance (PM) schedules. But PMs are expensive (parts, labor, lost production time), so fabs tend to put them off as long as possible.

Because the individual micro-excursions are so small, they are difficult to observe in end-of-line (EOL) yield data. They are frequently only seen in EOL yield data through the cumulative impact of dozens of micro-excursions occurring simultaneously; even then the loss more often appears to be baseline yield loss. As a result, fab engineers sometimes use the terms “salami slicing” or “penny shaving,” since these phrases describe how a series of many small actions can, as an accumulated whole, produce a large result [3].

Micro-excursions are typically brought to an end because: (a) a fab detects them and puts the tool responsible for the excursion down; or, (b) the fab gets lucky and a regular PM resolves the problem and restores the tool to its baseline. In the latter case, the fab may never know there was a problem.

The Superposition of Multiple Simultaneous Micro-Excursions

To understand the combined impact of these multiple micro-excursions, it is important to recognize:

  1. Micro-excursions on different layers (different process tools) will come and go at different times
  2. Micro-excursions have different magnitudes in defectivity or baseline shift
  3. Micro-excursions have different durations

In other words, each micro-excursion has a characteristic phase, amplitude and wavelength. Indeed, it is helpful to imagine individual micro-excursions as wave forms which combine to create a cumulative wave form. Mathematically, we can apply the Principle of Superposition [4] to model the resulting impact on yield from the contributing micro-excursions.

Figure 2 illustrates the cumulative effect of one, five, and 10 micro-excursions happening simultaneously in a 1,000 step semiconductor process. In this case, we are assuming a baseline yield of 90 percent, that each micro-excursion has a magnitude of 2 percent baseline yield loss, and that they are detected on the 10th lot after it starts. As expected, the impact of a single micro-excursion is negligible but the combined impact is large.
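The sketch below reproduces this thought experiment for a simulated run of 1,000 lots under the stated assumptions (90 percent baseline yield, 2 percent yield loss per active micro-excursion, detection 10 lots after onset); the random start lots and recurrence interval are arbitrary, so the absolute numbers will differ from Figure 2, but the trend of more yield loss and more lot-to-lot variation as excursions overlap is the same.

```python
# Minimal sketch: superposition of recurring micro-excursions. The baseline yield,
# 2% loss per active excursion and 10-lot detection delay follow the text; the
# random start lots and recurrence interval are arbitrary assumptions.
import random
import statistics

random.seed(0)
N_LOTS, BASELINE_YIELD, LOSS, LOTS_EXPOSED = 1000, 0.90, 0.02, 10

def simulate(n_excursions):
    """Per-lot yield with n_excursions recurring micro-excursions superimposed."""
    yields = [BASELINE_YIELD] * N_LOTS
    for _ in range(n_excursions):
        lot = random.randrange(N_LOTS)
        while lot < N_LOTS:
            for exposed in range(lot, min(lot + LOTS_EXPOSED, N_LOTS)):
                yields[exposed] -= LOSS            # losses add by superposition
            lot += random.randrange(30, 120)       # assumed interval until the next event
    return yields

for n in (1, 5, 10):
    y = simulate(n)
    print(f"{n:2d} micro-excursions: mean yield {100*statistics.mean(y):.1f}%, "
          f"lot-to-lot sigma {100*statistics.stdev(y):.1f}%")
```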

Figure 2. The cumulative impact of one, five, and 10 simultaneous micro-excursions happening in a 1,000 step process: increased yield loss and yield variation.

It is interesting to note that the bottom curve in Figure 2 would seem to suggest that the fab is suffering from a baseline yield problem. However, what appears to be 80 percent baseline yield is actually 90 percent baseline yield with multiple simultaneous micro-excursions, which brings the average yield down to 80 percent. This distinction is important since it points to different approaches in how the fab might go about improving the average yield. A true baseline yield problem would suggest that the fab devote resources to run experiments to evaluate potential process improvements (design of experiments (DOEs), split lot experiments, failure analysis, etc.). These activities would ultimately prove frustrating as the engineers would be trying to pinpoint a dozen constantly-changing sources of yield loss.

The fab engineer who correctly surmises that this yield loss is, in fact, driven by micro-excursions would instead focus on implementing tighter process tool monitoring strategies. Specifically, they would examine the sensitivity and frequency of process tool monitor inspections; depending on the process tool, these monitors could be bare wafer inspectors on blanket wafers and/or laser scanning inspectors on product wafers. The goal is to ensure these inspections provide timely detection of small micro-excursions, not just the big excursions.

The impact of an improved process tool monitoring strategy can be seen in Figure 3. By improving the capture rate (sensitivity), reducing the number of non-critical defects (by doing pre/post inspections or using an effective binning routine), and reducing other sources of noise, the fab can bring the exposed product down from 40 lots to 2.5 lots. This, in turn, significantly reduces the yield loss and yield variation.

Figure 3. The impact of 10 simultaneous micro-excursions for the fab with a compromised inspection strategy (brown curve, ~40 lots at risk), and a fab with an effective process tool monitoring strategy (blue curve, ~2.5 lots at risk).

Summary

Most fabs do a good job of finding the catastrophic defect excursions. Micro-excursions are much more common and much harder to detect. There are usually very small excursions happening simultaneously at many different layers that go completely undetected. The superposition of these micro-excursions leads to unexplained yield loss and unexplained yield variation.

As a yield engineer, you must be wary of this. An inspection strategy that guards only against catastrophic excursions can create the false sense of security that the layer is being effectively monitored—when in reality you are missing many of these smaller events that chip away or “salami slice” your yield.

References:

About the Author: 

Dr. David W. Price is a Senior Director at KLA-Tencor Corp. Dr. Douglas Sutherland is a Principal Scientist at KLA-Tencor Corp. Over the last 10 years, Dr. Price and Dr. Sutherland have worked directly with more than 50 semiconductor IC manufacturers to help them optimize their overall inspection strategy to achieve the lowest total cost. This series of articles attempts to summarize some of the universal lessons they have observed through these engagements.

Solid State Technology announced today that its premier semiconductor manufacturing conference and networking event, The ConFab, will be held at the iconic Hotel del Coronado in San Diego on May 14-17, 2017. A 30% increase in attendance in 2016, with a similar uplift expected in 2017, makes the venue an ideal meeting location as The ConFab continues to expand.

For more than 12 years, The ConFab, an invitation-only executive conference, has been the destination for key industry influencers and decision-makers to connect and collaborate on critical issues.

“The semiconductor industry is maturing, yet opportunities abound,” said Pete Singer, Editor-in-Chief of Solid State Technology and Conference Chair of The ConFab. “The Internet of Things (IoT) is exploding, which will result in a demand for “things” such as sensors and actuators, as well as cloud computing. 5G is also coming and will be the key technology for access to the cloud.”

The ConFab is the best place to seek a deeper understanding on these and other important issues, offering a unique blend of market insights, technology forecasts and strategic assessments of the challenges and opportunities facing semiconductor manufacturers. “In changing times, it’s critical for people to get together in a relaxed setting, learn what’s new, connect with old friends, make new acquaintances and find new business opportunities,” Singer added.

David Mount

Solid State Technology is also pleased to announce the addition of David J. Mount to The ConFab team as marketing and business development manager. Mount has a rich history in the semiconductor manufacturing equipment business and will be instrumental in guiding continued growth, and expanding into new high growth areas.

Mainstream semiconductor technology will remain the central focus of The ConFab, and the conference will be expanded with additional speakers, panelists, and VIP attendees that will participate from other fast growing and emerging areas. These include biomedical, automotive, IoT, MEMS, LEDs, displays, thin film batteries, photonics and advanced packaging. From both the device maker and the equipment supplier perspective, The ConFab 2017 is a must-attend networking conference for business leaders.

The ConFab conference program is guided by a stellar Advisory Board, with high level representatives from GLOBALFOUNDRIES, Texas Instruments, TSMC, Cisco, Samsung, Intel, Lam Research, KLA-Tencor, ASE, NVIDIA, the Fab Owners Association and elsewhere.

Details on the invitation-only conference are at: www.theconfab.com. For sponsorship inquiries, contact Kerry Hoffman at [email protected]. For details on attending as a guest or qualifying as a VIP, contact Sally Bixby at [email protected].

By David W. Price, Douglas G. Sutherland and Kara L. Sherman

Author’s Note: The Process Watch series explores key concepts about process control—defect inspection and metrology—for the semiconductor industry. Following the previous installments, which explored the 10 fundamental truths of process control, this new series of articles highlights additional trends in process control, including successful implementation strategies and the benefits for IC manufacturing. For this article, we are pleased to include insights from our guest author, Kara Sherman.

As we celebrate Earth Day 2016, we commend the efforts of companies who have found ways to reduce their environmental impact. In the semiconductor industry, fabs have been building Leadership in Energy and Environmental Design (LEED)-certified buildings [1] as part of new fab construction and are working with suppliers to directly reduce the resources used in fabs on a daily basis.

As IC manufacturers look for more creative ways to reduce environmental impact, they are turning to advanced process control solutions to reduce scrap and rework, thereby reducing fab resource consumption. Specifically, fabs are upgrading process control solutions to be more capable and adding additional process control steps; both actions reduce scrap and net resource consumption per good die out (Figure 1).

Figure 1. The basic equation for improving a fab’s environmental performance includes reducing resource use and increasing yield. Capable process control solutions help fabs do both by identifying process issues early thereby reducing scrap and rework.

Improved process control performance

Process control is used to identify manufacturing excursions, providing the data necessary for IC engineers to make production wafer dispositioning decisions and to take the corrective actions required to fix process issues.

For example, if after-develop inspection (ADI) data indicate a high number of bridging defects on patterned wafers following a lithography patterning step, the lithography engineer can take several corrective actions. In addition to sending the affected wafers back through the litho cell for rework, the engineer will stop production through the litho cell to fix the underlying process issue causing the yield-critical bridging defects. This quick corrective action limits the amount of material impacted and potentially scrapped.

To be effective, however, the quality of the process control measurement is critical. If an inspection or metrology tool has a lower capture rate or higher total measurement uncertainty (TMU), it can erroneously flag an excursion (false alarm), sending wafers for unnecessary rework, causing additional consumption of energy and chemicals and production of additional waste. Alternatively, if the measurement fails to identify a true process excursion, the yield of the product is negatively impacted and more dies are scrapped—again, resulting in less desirable environmental performance.

The example shown in Figure 2 examines the environmental impact of the process control data produced by two different metrology tools in the lithography cell. By implementing a higher quality metrology tool, the quality of the process control data is improved and the lithography engineers are able to make better process decisions resulting in a 0.1 percent reduction in unnecessary rework in the litho cell. This reduced rework results in a savings of approximately 0.5 million kWh of power and 2.4 million liters of water for a 100k WSPM fab—and a proportional percentage reduction in the amount of resist and clean chemicals consumed.

Figure 2. Higher quality process control tools produce better process control data within the lithography cell, enabling a 0.1 percent reduction in unnecessary rework that results in better environmental performance.

As a result of obtaining increased yield and reduced scrap, many fabs have upgraded the capability of their process control systems. To drive further improvements in environmental performance, fabs can benefit from utilizing the data generated by these capable process control systems in new ways.

Traditionally, the data generated by metrology systems have been utilized in feedback loops. For example, advanced overlay metrology systems identify patterning errors and feed information back to the lithography module and scanner to improve the patterning of future lots. These feedback loops have been developed and optimized for many design nodes. However, it can also be useful to feed forward (Figure 3) the metrology data to one or more of the upcoming processing steps [2]. By adjusting the processing system to account for known variations of an upcoming lot, errors that could result in wafer scrap are reduced.

For example, patterned wafer geometry measurement systems can measure wafer shape after processes such as etch and CMP and the resulting data can be fed back to help improve these processes. But the resulting wafer shape data can also be fed forward to the scanner to improve patterning [3-5]. Likewise, reticle registration metrology data can be used to monitor the outgoing quality of reticles from the mask shop, but it can also be fed forward to the scanner to help reduce reticle-related sources of patterning errors. Utilizing an intelligent combination of feedforward and feedback control loops, in conjunction with fab-wide, comprehensive metrology measurements, can help fabs reduce variation and ultimately obtain better processing results, helping reduce rework and scrap.
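As a toy sketch of the idea (not any vendor's actual algorithm), the code below combines a feedback term based on the previous lot's overlay residual with a feedforward term derived from the incoming lot's measured wafer shape; the variable names, gains and linear model are all hypothetical.

```python
# Toy sketch of combining feedback and feedforward corrections (all names, gains
# and the linear model are hypothetical illustrations, not any vendor's algorithm).

FEEDBACK_GAIN = 0.5        # fraction of the previous lot's residual fed back
SHAPE_TO_OVERLAY = 0.8     # assumed nm of overlay error per unit of wafer-shape metric

def scanner_correction(previous_overlay_residual_nm, incoming_wafer_shape_metric):
    """Correction applied to the next lot, in nm."""
    feedback = -FEEDBACK_GAIN * previous_overlay_residual_nm          # react to the past
    feedforward = -SHAPE_TO_OVERLAY * incoming_wafer_shape_metric     # anticipate the incoming lot
    return feedback + feedforward

# Example: last lot ran 2 nm high, incoming lot has a wafer-shape metric of 1.5 units.
print(f"correction = {scanner_correction(2.0, 1.5):.2f} nm")
```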

Figure 3. Multiple data loops to help optimize fab-wide processes. Existing feedback loops (blue) have existed for several design nodes and detect and compensate for process variations. New, optimized feedback loops (green) provide earlier detection of process changes. Innovative feed forward loops (orange) utilize metrology systems to measure variations at the source, then feed that data forward to subsequent process steps.

Earlier excursion detection reduces waste

Fabs are also reducing process excursions by adding process control steps. Figure 4 shows two examples of deploying an inspection tool in a production fab. In the first case (left), inspection points are set such that a lot is inspected at the beginning and end of a module, with four process steps in between. If a process excursion that results in yield loss occurs immediately after the first inspection, the wafers will undergo multiple processing steps, and many lots will be mis-processed before the excursion is detected. In the second case (right), inspection points are set with just two process steps in between. The process excursion occurring after the first inspection point is detected two days sooner, resulting in much faster time-to-corrective action and significantly less yield loss and material wasted.

Furthermore, in Case 1, the process tools at four process steps must be taken offline; in Case 2, only half as many process tools must be taken offline. This two-day delta in detection of a process excursion in a 100k WSPM fab with a 10 percent yield impact results in a savings of approximately 0.3 million kWh of power, 3.7K liters of water and 3500 kg of waste. While these environmental benefits were obtained by sampling more process steps, earlier excursion detection and improved environmental performance can also be obtained by sampling more sites on the wafer, sampling more wafers per lot, or sampling more lots. When a careful analysis of the risks and associated costs of yield loss is balanced with the costs of additional sampling, an optimal sampling strategy can be attained [6-7].

Figure 4. Adding an additional inspection point to the line will reduce the material at risk should an excursion occur after the first process step.

Conclusion

As semiconductor manufacturers focus more on their environmental performance, yield management serves as a critical tool to help reduce a fab’s environmental impact. Fabs can obtain several environmental benefits by implementing higher quality process control tools, combinations of feedback and feedforward control loops, optimal process control sampling, and faster cycles of learning. A comprehensive process control solution not only helps IC manufacturers improve yield, but also reduces scrap and rework, reducing the fab’s overall impact on the environment.

References

  1. Examples:
    1. https://newsroom.intel.com/news-releases/intels-arizona-campus-takes-the-leed/
    2. http://www.tsmc.com/english/csr/green_building.htm
    3. http://www.ti.com/corp/docs/manufacturing/RFABfactsheet.pdf
    4. http://www.globalfoundries.com/about/vision-mission-values/responsibility/environmental-sustainability-employee-health-and-safety
  2. Moyer, “Feed It Forward (And Back),” Electronic Engineering Journal, September 2014. http://www.eejournal.com/archives/articles/20140915-klat5d/
  3. Lee et al., “Improvement of Depth of Focus Control using Wafer Geometry,” Proc. of SPIE, Vol. 9424, 942428, 2015.
  4. Tran et al., “Process Induced Wafer Geometry Impact on Center and Edge Lithography Performance for Sub 2X nm Nodes,” 26th Annual SEMI Advanced Semiconductor Manufacturing Conference, 2015.
  5. Morgenfeld et al., “Monitoring process-induced focus errors using high resolution flatness metrology,” 26th Annual SEMI Advanced Semiconductor Manufacturing Conference, 2015.
  6. “Process Watch: Sampling Matters,” Semiconductor Manufacturing and Design, September 2014.
  7. “Process Watch: Fab Managers Don’t Like Surprises,” Solid State Technology, December 2014.
  8. “Reducing Environmental Impact with Yield Management,” Chip Design, July 2012.

About the Authors:

Dr. David W. Price, Dr. Douglas Sutherland, and Ms. Kara L. Sherman are Senior Director, Principal Scientist, and Director, respectively, at KLA-Tencor Corp. Over the last 10 years, this team has worked directly with more than 50 semiconductor IC manufacturers to help them optimize their overall inspection strategy to achieve the lowest total cost. This series of articles attempts to summarize some of the universal lessons they have observed through these engagements.

By Douglas G. Sutherland and David W. Price

Author’s Note: This is the sixth in a series of 10 installments that explore fundamental truths about process control—defect inspection and metrology—for the semiconductor industry. Each article in this series introduces one of the 10 fundamental truths and highlights their implications. Within this article we will use the term inspection to imply either defect inspection or a parametric measurement such as film thickness or critical dimension (CD).

In previous installments we discussed capability, sampling, missed excursions, risk management and variability. Although all of these topics involve an element of time, in this paper we will discuss the importance of timeliness in more detail.

The sixth fundamental truth of process control for the semiconductor IC industry is:

Time is the Enemy of Profitability

There are three main phases to semiconductor manufacturing: research and development (R&D), ramp, and high volume manufacturing (HVM). All of them are expensive and time is a critical element in all three phases.

From a cash-flow perspective, R&D is the most difficult phase: the fab is spending hundreds of thousands of dollars every day on manpower and capital equipment with no revenue from the newly developed products to offset that expense. In the ramp phase the fab starts to generate some revenue early on, but the yield and volume are still too low to offset the production costs. Furthermore, this revenue doesn’t even begin to offset the cost of R&D. It is usually not until the early stages of HVM that the fab has sufficient wafer starts and sufficient yield to start recovering the costs of the first two phases and begin making a profit. Figure 1 below shows the cumulative cash flow for the entire process.

Figure 1. The cumulative cash-flow as a function of time. In the R&D phase the cash-flow is negative but the slope of the curve turns positive in the ramp phase as revenues begin to build. The total costs do not turn positive until the beginning of high-volume manufacturing.

What makes all of this even more challenging is that all the while, the prices paid for these new devices are falling. The time required from initial design to when the first chips reach the market is a critical parameter in the fab’s profitability. Figure 2 shows the actual decay curve for the average selling price (ASP) of memory chips from inception to maturity.

Figure 2. Typical price decline curve for memory products in the first year after product introduction. Similar trends can be seen for other device types.

Consequently, while the fab is bleeding money on R&D, their ability to recoup those expenses is dwindling as the ASP steadily declines. Anything that can shorten the R&D and ramp phases shortens the time-to-market and allows fabs to realize the higher ASP shown on the left hand side of Figure 2.

From Figures 1 and 2 it is clear that even small delays in completing the R&D or ramp phases can make the difference between a fab that is wildly profitable and one that struggles just to break even. Those organizations that are the first to bring the latest technology to market reap the majority of the reward. This gives them a huge head start—in terms of both time and money—in the development of the next technology node and the whole cycle then repeats itself.
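A back-of-the-envelope sketch of that point: assuming a hypothetical exponential ASP decline and a fixed shipment volume, the code below estimates the first-year revenue given up by reaching the market three months late. None of the numbers come from Figure 2; they are invented to set the scale.

```python
# Back-of-the-envelope sketch: revenue impact of reaching the market later, assuming
# a hypothetical exponential ASP decline. The starting price, decline rate and
# shipment volume are invented for illustration, not data from Figure 2.
import math

ASP0 = 10.0          # hypothetical starting price per chip, dollars
DECAY = 0.08         # hypothetical ASP decline of ~8% per month
VOLUME = 5_000_000   # hypothetical chips shipped per month

def revenue(start_month, months=12):
    return sum(VOLUME * ASP0 * math.exp(-DECAY * m)
               for m in range(start_month, start_month + months))

on_time, three_months_late = revenue(0), revenue(3)
print(f"cost of a 3-month delay: ${on_time - three_months_late:,.0f} over the first year")
```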

Process control is like a window that allows you to see what is happening at various stages of the manufacturing cycle. Without this, the entire exercise from R&D to HVM would be like trying to build a watch while wearing a blindfold. This analogy is not as far-fetched as it may seem. The features of integrated circuits are far too small to be seen, and even when inspections are made, they are usually done on only a small percentage of the total wafers produced. Parametric measurements (films, CD and overlay) are performed on only an infinitesimal percentage of the total transistors on each of the selected wafers. For the vast majority of the time, the fab manager truly is blind. Parametric measurements and defect inspection are brief moments when ‘the watch maker’ can take off the blindfold, see the fruits of their labor and make whatever corrections may be required.

As manufacturing processes become more complex with multiple patterning, pitch splitting and other advanced patterning techniques, the risk of not yielding in a timely fashion is higher than ever. Having more process control steps early in the R&D and ramp phases increases the number of windows through which you can see how the process is performing. Investing in the highest quality process control tools improves the quality of these windows. A window that distorts the view—an inspection tool with poor capture rate or a parametric tool with poor accuracy—may be worse than no window at all because it wastes time and may provide misleading data. An effective process control strategy, consisting of the right tools, the right recipes and the right sampling all at the right steps, can significantly reduce the R&D and ramp times.

On a per wafer basis, the amount of process control should be highest in the R&D phase when the yield is near zero and there are more problems to catch and correct. Resolving a single rate-limiting issue in this phase with two fewer cycles of learning—approximately one month—can pay for a significant portion of the total budget spent on process control.

After R&D, the ramp phase is the next most important stage requiring focused attention with very high sampling rates. It’s imperative that the yield be increased to profitable levels as quickly as possible and you can’t do this while blindfolded.

Finally, in the HVM phase an effective process control strategy minimizes risk by discovering yield limiting problems (excursions) in a timely manner.

It’s all about time, as time is money. 

References:

  1. Process Watch: You Can’t Fix What You Can’t Find, Solid State Technology, July 2014
  2. Process Watch: Sampling Matters, Semiconductor Manufacturing and Design, September 2014
  3. Process Watch: The Most Expensive Defect, Solid State Technology, December 2014
  4. Process Watch: Fab Managers Don’t Like Surprises, Solid State Technology, December 2014
  5. Process Watch: Know Your Enemy, Solid State Technology, March 2015

About the authors:

Dr. David W. Price is a Senior Director at KLA-Tencor Corp. Dr. Douglas Sutherland is a Principal Scientist at KLA-Tencor Corp. Over the last 10 years, Dr. Price and Dr. Sutherland have worked directly with over 50 semiconductor IC manufacturers to help them optimize their overall inspection strategy to achieve the lowest total cost. This series of articles attempts to summarize some of the universal lessons they have observed through these engagements.

By Ron Press, Mentor Graphics

Three-dimensional (3D) ICs, chips assembled from multiple vertically stacked die, are coming. They offer better performance, reduced power, and improved yield. Yield is typically determined using silicon area as a key factor; the larger the die, the more likely it contains a fabrication defect. One way to improve yield, then, is to segment the large and potentially low-yielding die into multiple smaller die that are individually tested before being placed together in a 3D IC.

But 3D ICs require some modification to current test methodologies. Test for 3D ICs has two goals: improve the pre-packaged test quality, and establish new tests between the stacked die. The basic requirements of a test strategy for 3D ICs are the same as for traditional ICs—portability, flexibility, and thoroughness.  A test strategy that meets these goals is based on a plug-and-play architecture that allows die, stack, and partial stack level tests to use the same test interface, and to retarget die-level tests directly to the die within the 3D stack.

A plug-and-play approach that Mentor Graphics developed uses an IEEE 1149.1 (JTAG) compliant TAP as the interface at every die and IEEE P1687 (IJTAG) networks to define and control test access. The same TAP structure is used on all die, so that when doing wafer test on individual die, even packaged die, the test interface is through the same TAP without any modifications.

When multiple die are stacked in a 3D package, only the TAP on the bottom die is visible as the test interface to the outside world, in particular to the ATE. For test purposes, any die can be used as the bottom die. From outside of the 3D package, for board-level test for example, the 3D package appears to contain only the one TAP from the bottom die.

Each die also uses IJTAG to model the TAP, the test access network, and test instruments contained within the die. IJTAG provides a powerful means for the test strategy to adjust to and adopt future test features. It is based on and integrates the IEEE 1149.1 and IEEE 1500 standards, but expands beyond their individual possibilities.

Our test methodology achieves high-quality testing of individual die through techniques like programmable memory BIST and embedded compression ATPG with logic BIST. The ATPG infrastructure also allows for newer high-quality tests such as timing-aware and cell-aware.

For testing the die IO, the test interface is based on IEEE 1149.1 boundary scan. Bidirectional boundary scan cells are located at every IO to support a contactless test technique which includes an “IO wrap” and a contactless leakage test.  This use of boundary scan logic enables thorough die-level test, partially packaged device test, and interconnect test between packaged dies.

The test methodology for 3D ICs also opens the possibilities of broader adoption of hierarchical test. Traditionally, DFT insertion and pattern generation efforts occurred only after the device design was complete. Hierarchical DFT lets the majority of DFT insertion and ATPG efforts go into individual blocks or die. Patterns for BIST and ATPG are created for an individual die and then retargeted to the larger 3D package. As a result, very little work is necessary at the 3D package-level design. Also, the DFT logic and patterns for any die can be retargeted to any package in which the die is used. Thus, if the die were used in multiple packages then only one DFT insertion and ATPG effort would be necessary, which would then be retargeted to all the platforms where it is used.

Using a common TAP structure on all die and retargeting die patterns to the 3D package are capabilities that exist today. However, there is another important new test requirement for a 3D stack— the ability to test interconnects between stacked dies. I promote a strategy based on the boundary scan bidirectional cells at all logic die IO, including the TSVs. Boundary scan logic provides a standard mechanism to support die-to-die interconnect tests, along with wafer- and die-level contactless wrap and leakage tests.
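The sketch below illustrates the concept of a boundary scan interconnect test between two stacked die: a walking-ones pattern is driven from one die's boundary cells and captured at the other, and any mismatch flags a faulty interconnect. The connection count and the injected fault are invented; a real test would be applied through the TAP and IJTAG network.

```python
# Conceptual sketch of a boundary-scan interconnect test between two stacked die.
# The walking-ones patterns, connection count and injected fault are invented for
# illustration; a real flow would drive these through the TAP/IJTAG network.

N_INTERCONNECTS = 4   # e.g., TSVs between the lower and upper die

# Walking-ones test set: pattern i drives a 1 only on interconnect i.
patterns = [[1 if bit == i else 0 for bit in range(N_INTERCONNECTS)]
            for i in range(N_INTERCONNECTS)]

def upper_die_capture(driven):
    """Simulated capture on the upper die; interconnect 2 is stuck-at-0 (injected fault)."""
    captured = list(driven)
    captured[2] = 0
    return captured

for i, driven in enumerate(patterns):
    captured = upper_die_capture(driven)
    for net, (exp, got) in enumerate(zip(driven, captured)):
        if exp != got:
            print(f"pattern {i}: interconnect {net} expected {exp}, captured {got} -> open/stuck fault")
```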

To test between the logic die and Wide I/O external memory die, the Wide I/O JEDEC boundary scan register at the memory IO is used. The addition of a special JEDEC controller placed in the logic die and controlled by the TAP lets it interface to the memory. Consequently, a boundary scan-based interconnect test is supported between the logic die and external memory. For at-speed interconnect test, IJTAG patterns can be applied to hierarchical wrapper chains in the logic die, resulting in an at-speed test similar to what is used today for hierarchical test between cores.

Finally, for 3D IC test, you need test and diagnosis of the whole 3D package. Use the embedded DFT resources to maximize commonalities across tests and facilitate pre- and post-stacking comparisons. To validate the assembled 3D IC, you must follow an ordered test suite that starts with the simplest tests first, as basic defects are more likely to occur than complex ones. It then progressively increases in complexity, assuming the previous tests passed.

Industry-wide, 3D test standards such as P1838, test requirements, and the types of external memories that are used are still in flux. This is one reason I emphasized plug-and-play architecture and flexibility. By structuring the test architecture on IJTAG and existing IJTAG tools, you can adapt and adjust the test in response to changing requirements. I believe that test methodologies that develop for testing 3D ICs will lead to an age of more efficient and effective DFT  overall.

Figure 1. The overall architecture of our 3D IC solution. A test is managed through a TAP structure on the bottom die that can enable the TAPs of the next die in the stack and so on. A JEDEC controller is used to support interconnect test of Wide I/O memory dies.

More from Mentor Graphics:

Model-based hints: GPS for LFD success

Reducing mask write-time – which strategy is best?

The advantage of a hybrid scan test solution

Ron Press is the technical marketing director of the Silicon Test Solutions products at Mentor Graphics. He has published dozens of papers in the field of test, is a member of the International Test Conference (ITC) Steering Committee, and is a Golden Core member of the IEEE Computer Society, and a Senior Member of IEEE. Press has patents on reduced-pin-count testing and glitch-free clock switching.

Long live FinFET


February 3, 2014

By Zhihong Liu, Executive Chairman, ProPlus Design Solutions, Inc., San Jose, Calif.

FinFET technology, with its multi-gate architecture for superior scalability, is gaining momentum with foundries, EDA vendors and fabless design companies, a welcome trend that began in 2013 and will continue into 2014.

Enormous effort has been expended already by leading manufacturers such as GLOBALFOUNDRIES, Intel, Samsung and TSMC and their EDA partners to support the new technology node that offers so much promise. The move to FinFET portends good things for the semiconductor industry as it enables continuous Moore’s Law scaling down to sub-10nm and delivers higher performance and lower power consumption. The revolutionary device architecture also brings challenges to designers and EDA companies developing FinFET design tools and methodologies. The achievement of FinFET solution readiness across the design flow is a significant accomplishment, especially considering the PDK itself was migrating in parallel from v0.1 and v0.5 toward v1.0.

The industry-standard BSIM-CMG model, developed by the BSIM group at the University of California at Berkeley, uses complicated surface-potential-based equations to model FinFET devices, which also require complex parasitic resistance and capacitance models. As a result, SPICE simulation of FinFET designs is known to be a few times slower than simulation of bulk technology with BSIM4 models. In addition, netlist sizes for FinFET designs are large, especially for post-layout extracted simulations, which are the norm given the impact of process variations, including layout effects, on a design. Lower Vdd, increased parasitic capacitance coupling and noise sensitivity create a need for highly accurate SPICE circuit simulation in which the convergence of currents and charges is carefully controlled. These issues significantly impact the type of circuit simulation solution that will be viable for FinFETs.

FinFET poses many other design challenges that both EDA vendors and designers have to respond to. For example, “width quantization” puts new requirements on analog and standard cell designers: they can only use quantized widths, corresponding to whole numbers of fins, instead of arbitrary width values in their designs.
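To make width quantization concrete (a minimal sketch; the fin height and thickness below are hypothetical placeholders, not any foundry’s numbers), a requested device width gets rounded to a whole number of fins:

```python
import math

# Illustrative FinFET width quantization. The effective electrical width is
# roughly W_eff = N_fin * (2 * H_fin + T_fin), so a designer can only realize
# widths that correspond to whole fin counts. H_FIN and T_FIN are placeholder
# values for illustration, not real foundry data.

H_FIN = 0.030  # fin height in micrometers (hypothetical)
T_FIN = 0.008  # fin thickness in micrometers (hypothetical)

def width_per_fin() -> float:
    """Effective width contributed by a single fin."""
    return 2 * H_FIN + T_FIN

def quantize_width(target_width: float) -> tuple[int, float]:
    """Round a desired device width up to the nearest realizable fin count."""
    n_fin = max(1, math.ceil(target_width / width_per_fin()))
    return n_fin, n_fin * width_per_fin()

if __name__ == "__main__":
    for w in (0.1, 0.25, 0.5):  # desired widths in micrometers
        n, w_eff = quantize_width(w)
        print(f"target {w:.3f} um -> {n} fins, W_eff = {w_eff:.3f} um")
```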

The FinFET harvest is just beginning. As production tapeout activity ramps up, more emphasis will be placed on improving the performance of design flows, such as faster simulation and better sampling methods for corner and high-sigma Monte Carlo analysis. Parametric yield will continue to be a key requirement as design houses attempt to maximize ROI from an existing node or justify the investment in a new one. The days of “margining” to safeguard a design are over. At the newer nodes, designers will invest more time figuring out where the yield cliff actually is and making sure their designs are robust and will yield in production.

As a result, designers will have to seek out new tools and methodologies to overcome FinFET design challenges. One example is the adoption of giga-scale parallel SPICE simulators to address the circuit simulation challenges of FinFET designs: traditional SPICE simulators lack the capacity and performance to support FinFET designs, while FastSPICE simulators likely will not meet accuracy requirements. Another example is the increased interest FinFETs have created in high-sigma analysis of library designs such as SRAMs, standard cells and I/Os. Designers are working hard to fulfill a foundry requirement to verify bitcell designs to 7 sigma, a requirement that can only be met with proven variation analysis tools that combine large capacity with yield analysis out to such high sigma values.
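To give a feel for what “7 sigma” means in practice (a minimal sketch only, not ProPlus’s method; the standard-normal “performance metric” and the mean shift are assumptions for illustration), brute-force Monte Carlo would need on the order of 10^12 samples to observe a single 7-sigma failure, whereas a mean-shifted importance-sampling estimate gets there with a modest sample count:

```python
import numpy as np

# Illustrative high-sigma failure estimation with mean-shifted importance
# sampling. The "circuit metric" here is just a standard normal variable and
# the failure threshold is 7 sigma; both are stand-ins for a real bitcell
# performance metric, not any foundry or vendor model.

rng = np.random.default_rng(0)
THRESHOLD = 7.0      # failure if the metric exceeds 7 sigma
SHIFT = 7.0          # shift the sampling distribution toward the failure region
N = 200_000          # samples; brute force would need ~1e12 to see one failure

# Sample from N(SHIFT, 1) instead of N(0, 1)...
x = rng.normal(loc=SHIFT, scale=1.0, size=N)

# ...and reweight each sample by the likelihood ratio phi(x) / phi(x - SHIFT),
# which for unit-variance normals simplifies to exp(SHIFT**2/2 - SHIFT*x).
weights = np.exp(0.5 * SHIFT**2 - SHIFT * x)

fail_prob = np.mean((x > THRESHOLD) * weights)
print(f"estimated P(fail) ~ {fail_prob:.3e}")   # ~1.3e-12 for a 7-sigma tail
```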

Yes, FinFET could be the technology to give the semiconductor and EDA industry a major boost. I say long live FinFET.

Read more from ProPlus Design Solutions’ Blog:

Memory design challenges require giga-scale SPICE simulation

DAC panels tackle giga-scale design challenges, semiconductor market in China

SPICEing up circuit design

SEMI ISS: Scaling innovation


February 3, 2014

By Ira Feldman, Feldman Engineering Corp.

Don’t pop the champagne just yet! Although plenty of good news was shared at the 2014 SEMI Industry Strategy Symposium (ISS), there was a sobering outlook of possibly limited long-term growth due to technology issues as well as economic projections. Also noticeable was the lack of news and updates on key industry developments.

This is the yearly “data rich” or “data overload” (take your pick) conference of semiconductor supply chain executives. The majority of the attendees and presenters are from the SEMI member companies that develop the equipment, materials, processes, and technology used to build, test, and package semiconductors. Keeping the pressure on for advanced technology were the “end customer” attendees and presenters – semiconductor manufacturers.

The official theme was “Pervasive Computing – An Enabler for Future Growth” and the presentations made it clear that pervasive computing will greatly increase the demand for semiconductors. However, as discussed in the context of very-high-volume applications such as the Internet of Things and the recent TSensors Summit, such explosive growth will only occur at a sufficiently low price point for these semiconductors and micro-electromechanical systems (MEMS) based sensors.

A major focus of ISS is economics: both the global trends that drive the semiconductor industry and the cost of new technology. Unlike past years, much of the discussion about new technology centered on economics rather than “will it work.” Past discussions about 450mm wafers, extreme ultra-violet (EUV) photolithography, and 3D transistor structures focused on the soundness of these technologies, not the economics.

The question being asked of technologists with increasing frequency is: “When will Moore’s Law end?” Moore’s prediction was that the number of transistors on a device at which cost per transistor is minimized doubles roughly every two years; the cost reduction required to keep pace with that prediction has been achieved primarily by shrinking the transistors (“scaling”). Many smart people have predicted that the end of transistor scaling, and hence the demise of Moore’s Law, was upon us, only to be proven wrong as scaling continued.

Numerous speakers, including Jon Casey (IBM) and Mike Mayberry (Intel), stated that scaling will continue below the 10 nm process node, perhaps to 5 or 7 nm. However, the question raised by both the speakers and the audience was at what cost this scaling will be achieved. Rick Wallace (KLA-Tencor) reminded us that it was economics, not technology, that killed the Concorde supersonic plane. Drawing a parallel to the challenge of continued scaling, Mr. Wallace said, “Moore’s Law is more likely to be killed in the board room than in the laboratory.” Therefore, we really need to look to the product managers and executives as well as the technologists for the answer.

Development of 450mm silicon wafers continues via the G450C consortium, and Paul Farrar (G450C) provided a progress update. Current estimates show 450mm ready for production somewhere between late 2017 and mid-2020. Meanwhile, Bob Johnson (Gartner) showed projected mid-2018 intercepts for Intel and Taiwan Semiconductor Manufacturing Company (TSMC) capability, with the first true production fabs in 2019-2020. Many equipment companies expressed concerns about their return on investment (ROI) for developing 450 mm equipment, especially with a limited market. The weakness of demand was summed up by Manish Bhatia (SanDisk), who said SanDisk/Toshiba didn’t want to build the last 300 mm fab, nor were they in the running to build the first 450 mm fab. It appears that many customers and suppliers share a “wait and see” attitude, even though many years of hard work are still required to launch 450mm.

No formal update on extreme ultra-violet (EUV) photolithography was presented this year, although concerns about throughput and cost were mentioned by several speakers. These concerns are part of the fundamental economics of scaling, which will require EUV and/or multi-patterning (multiple passes through the photolithography patterning modules for each layer of the semiconductor device, instead of the single pass typical of older process nodes) to achieve smaller dimensions. ASML’s last presentation to ISS was in 2012, shortly before they became the “sole” developer of EUV, so I hope there will be a public update later this year. For a while, EUV appeared to be a prerequisite for 450 mm development based upon process node intercept, but the G450C plan of record (POR) is 193 nm immersion photolithography. G450C will start “investigating” EUV in the second half of 2016. Is this another code for “wait and see”?

[Chart presented by Ivo Bolsens (Xilinx) at SEMI ISS 2014: NRE cost by process node]

Ivo Bolsens (Xilinx) reviewed the challenges and costs of developing next-generation application specific integrated circuits (ASICs) and application specific standard products (ASSPs). In particular, he shared the staggering increase in the non-recurring engineering (NRE) cost of developing leading-edge semiconductors (shown in the chart above). In less than three years, the estimated NRE cost has jumped from $85 M for a 45 nm design to over $170 M for 28 nm. Included in these NRE estimates are the cost of the design work, masks, embedded software (IP licenses), and yield ramp-up. The data presented shows exponential growth in NRE at each new process node; a rough extrapolation would place 14 nm at $340 M and 7 nm at $680 M. Good news: scaling will continue. Bad news: products may not be able to afford it.
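A quick back-of-the-envelope check of that extrapolation (a sketch only, assuming NRE roughly doubles at each successive node cited; only the 45 nm and 28 nm figures come from the presentation):

```python
# Back-of-the-envelope extrapolation of leading-edge NRE cost, assuming the
# cost roughly doubles with each successive node cited (45 -> 28 -> 14 -> 7 nm).
# The two starting points are the figures from the talk; the extrapolated
# values are estimates, not presented data.

nre_musd = {45: 85, 28: 170}   # reported NRE in millions of USD
nodes = [45, 28, 14, 7]

cost = nre_musd[45]
print(f"45 nm: ~${cost} M NRE (reported)")
for node in nodes[1:]:
    cost = nre_musd.get(node, 2 * cost)   # use the reported value if known, else double
    label = "reported" if node in nre_musd else "extrapolated"
    print(f"{node} nm: ~${cost} M NRE ({label})")
```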

With all this dark and murky news about the future, what was the good news from SEMI ISS? Innovation. The undercurrent of almost every presentation was: since we cannot guarantee that future scaling will provide the savings needed, we need to look at alternative materials, device structures, computation models, system architectures, etc. to continue on the expected cost reduction slope. The list includes a wide range of technology from “More than Moore” (system in packaging, 2.5D, 3D packaging, etc.) to 3D FinFET transistors to carbon nanotubes (CNT) to optoelectronic interconnects, and beyond.

Mr. Wallace, in his opening keynote, discussed the prerequisites for innovation and shared his concern that some companies have become “too big to innovate.” Even more importantly, if the semiconductor industry wants to remain relevant and attract the best young talent, we need to be the “magic behind the gadget.” The Tuesday afternoon sessions closed out with Mark Randall (Adobe Systems), who described his efforts to drive grass-roots innovation by empowering any employee to innovate with no strings attached. Young Sohn (Samsung Electronics) gave his banquet keynote, “Innovation in a Connected World,” describing Samsung’s work to move from communication devices (smartphones, tablets, etc.) to products that do more to improve lives.

Yes, “innovation” has become an industry buzzword that is often overused. But having seen where these companies say they need to go, it is clear many understand it is time to innovate or die. They realize that profitable scaling won’t last forever. Difficult strategic decisions need to be made; marketers and engineers cannot and will not change the direction of their companies by themselves. Enabling innovation and making bold strategic changes requires executive leadership.

Meanwhile, consumers will expect the continuation of Moore’s Law – or at least its end result of continually lower cost and/or higher performance – without giving a thought to the industry’s inability to continue cost-effective scaling or other technical mumbo-jumbo. We still need to continue to make the magic happen!

As always, I look forward to hearing your comments either below or directly. Please don’t hesitate to contact me to discuss your thoughts. For more of my thoughts, please see hightechbizdev.com.

Ira Feldman ([email protected]) is the Principal Consultant of Feldman Engineering Corp., which guides high-technology products and services from concept to commercialization. He follows many “small technologies,” from semiconductors to MEMS to nanotechnology, engaging in a wide range of projects including product generation, marketing, and business development.

By Zvi Or-Bach, President and CEO of MonolithIC 3D Inc.

A recent SEMI report titled SEMI Reports Shift in Semiconductor Capacity and Equipment Spending Trends reveals an important new trend in semiconductors: “spending trends for the semiconductor industry have changed. Before 2009, capacity expansion corresponded closely to fab equipment spending.  Now more money is spent on upgrading existing facilities, while new capacity additions are occurring at a lower pace, to levels previously seen only during an economic or industry-wide slowdown”.

Looking at semiconductor equipment bookings should be the first step in any attempt to assess future semiconductor trends. While talking is easy, spending billions of dollars is not. Vendors look deeply into their new design bookings and their future production needs before committing new dollars to long-lead-time purchases for their future manufacturing needs. In the past decade it was relatively simple: as soon as a new process node reached production maturity, vendors would place new equipment orders, knowing that soon enough all new designs and their volume would shift to the new node. But the SEMI report seems to tell us that we are facing a new reality in the semiconductor industry – a Paradigm Shift.

A while ago VLSI Research Inc. released the following chart with the question: Is Moore’s Law slowing down?

[Chart: VLSI Research Inc., “Is Moore’s Law slowing down?”]

The chart above indicates a coming change in the industry dynamic, and 2013 might be the year that this turns out to be a Paradigm Shift.

Just a few weeks ago at the SEMI ISS conference, Handel Jones of IBS presented many very illuminating charts and forecasts. The following chart might be the most important of them: the revised calculation of per-gate cost with scaling.

[Chart: IBS (Handel Jones), revised calculation of per-gate cost with scaling]

Clearly the chart reveals an unmistakable Paradigm Shift, as 28nm is the last node at which dimensional scaling provides a per-gate cost reduction. It makes perfect sense for the vendors and their leading-edge customers to respond accordingly. Hence it is easy to understand why more equipment is being bought to support 28nm and older nodes.
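The arithmetic behind that conclusion is simple (the numbers in the sketch below are hypothetical placeholders, not IBS data): cost per gate is wafer cost divided by gates per wafer, and it stops falling once wafer cost grows faster than gate density from node to node.

```python
# Illustrative arithmetic behind the per-gate cost trend: cost per gate is
# wafer cost divided by gates per wafer. The numbers below are hypothetical
# placeholders chosen only to show the mechanism (wafer cost rising faster
# than gate density), not actual IBS data.

nodes = ["28nm", "20nm", "16/14nm"]
wafer_cost = [1.00, 1.45, 1.95]     # normalized wafer cost (hypothetical)
gate_density = [1.00, 1.35, 1.75]   # normalized gates per wafer (hypothetical)

for node, cost, density in zip(nodes, wafer_cost, gate_density):
    print(f"{node}: relative cost per gate = {cost / density:.2f}")
# When wafer cost grows faster than density, cost per gate stops falling.
```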

The following table, also from Jones, illustrates this new reality.

[Table: IBS (Handel Jones)]

In the equipment business, more than 50% of demand comes from the memory segment, where the dollars per wafer sold are much lower than in logic. It seems that the shift has already taken place there. Quoting Randhir Thakur, Executive Vice President and General Manager of the Silicon Systems Group at Applied Materials, Inc., as recently published in “The shift to materials-enabled 3D”: “our foundry/logic and memory customers that manufacture semiconductors are migrating from lithography-enabled 2D transistors and 2D NAND to materials-enabled 3D transistors and 3D NAND… Another exciting inflection in 2014 is our memory customers’ transition from planar two-dimensional NAND to vertical three-dimensional NAND. 3D technology holds the promise of terabit-era capacity and lower costs by enabling denser device packing, the most fundamental requirement for memory.” This fits nicely with the following illustration made by Samsung as part of their 3D-NAND marketing campaign.

[Illustration: Samsung 3D-NAND marketing material]

As for the logic segment, the 3D transition has yet to happen. But, as we wrote with respect to the 2013 IEDM in “More Momentum Builds for Monolithic 3D ICs,” momentum is building there as well.

The following chart from CEA-Leti illustrates the interest in monolithic 3D:

[Chart: CEA-Leti, interest in monolithic 3D]

It should be noted that monolithic 3D technology for logic is far behind in comparison to memory. Given that the current issues with dimensional scaling are clearly only going to get worse, we should hope that the effort on logic monolithic 3D will accelerate soon. In his invited paper at IEDM 2013, Geoffrey Yeap, VP of Technology at Qualcomm, articulated why monolithic 3D is the most effective path forward for the semiconductor industry: “Monolithic 3D (M3D) is an emerging integration technology poised to reduce the gap significantly between transistors and interconnect delays to extend the semiconductor roadmap way beyond the 2D scaling trajectory predicted by Moore’s Law,” as illustrated by his Fig. 17 below.

[Figure: Fig. 17 from Geoffrey Yeap’s IEDM 2013 invited paper]

So, in conclusion, our industry is now going through a paradigm shift. Monolithic 3D is shaping up as one of the technologies that could revive and sustain the growth and improvement the industry has historically enjoyed, well into the future. The 2014 S3S conference, scheduled for October 6-9, 2014 at the Westin San Francisco Airport, will offer both educational opportunities and cutting-edge research in monolithic 3D and other supporting domains. Please mark your calendar for this opportunity to contribute to and learn about this new and exciting technology.