Category Archives: Metrology

There are many different situations in which special attention to color choices provides the potential to improve the manufacturing results of multi-patterned masks.

BY DAVID ABERCROMBIE and ALEX PEARSON, Mentor Graphics, Wilsonville, OR

Multi-patterning design rules don’t care about color (mask assignments). As long as all the spacing and alternation constraints are met, any coloring arrangement is legal. In the beginning of multi-patterning, all possible color combinations that passed the design rule checks (DRC) were considered and treated as equal. As the technology moves into more advanced nodes, however, that is no longer the case.

As it turns out, one legal coloring choice can, in fact, be significantly better than another when it comes to manufacturing success and chip performance. Designers working on multi-patterned layouts need to understand the issues and conditions that affect their color choices, so they can determine the optimal coloring scheme for their designs.

Color density

In multi-patterned designs, each color assignment represents a different manufacturing mask. Each mask is processed through a lithography operation, and the pattern is etched onto the wafer. Once all the masks are processed, the goal is to have all the shapes created from all the masks act as if they were all generated from one mask, with very similar process biases and variations.
To ensure that type of consistency, all the masks need to resemble each other in terms of the total area and distribution of shapes. Clumping shapes in one area of one mask, while distributing shapes evenly across another, is going to result in very different process bias behavior and results. Balancing the color density across each mask provides the best manufacturing result.

To explain why, let’s look at a standard cell library design. Because power rails are typically much wider than the routing tracks inside the cells, they constitute a large portion of the polygon area inside the standard cell design block. The number of tracks in the library forces the power rails into certain color pairings (FIGURE 1). In the first case, the power rails are forced to opposite colors, while in the second, they are forced to the same color.


The color ratio distribution charts tell the story of the two designs. When the power rails alternate color, the distribution of the color density ratio is well-centered around the 50% point. However, forcing the power rails to be a single color can dramatically shift the color ratio towards that single color. This distribution is more problematic to manufacture.

But uniform color density isn’t just a chip-wide, global issue—even local differences can have negative impacts, because local areas with excessive or insufficient color density can impact the biases of nearby shapes during processing. In FIGURE 2, both coloring options are legal, but the polygons within each connected component are not equal in area, so the choice of G-B-G-B vs. B-G-B-G affects how much area of each color ultimately exists within this local region. The second coloring choice results in a more uniform area density of each color.
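
To make the local-balance decision concrete, here is a minimal sketch (a hypothetical helper, not any EDA tool's API) of how a decomposition step might pick between the two legal colorings of a connected component so that the green/blue area split in the surrounding window stays as close to 50/50 as possible; the polygon areas and window totals are illustrative values only.

```python
# Minimal sketch (hypothetical helper, not an EDA tool's API): pick between the
# two legal colorings of an alternating connected component so that the local
# green area ratio stays as close to 50% as possible.

def best_alternating_coloring(areas, fixed_green=0.0, fixed_blue=0.0):
    """areas: polygon areas of one component, in chain order (adjacent shapes
    must take opposite colors). fixed_green / fixed_blue: area already committed
    to each mask in the same local window. Returns a 'G'/'B' list whose addition
    keeps the window's green ratio closest to 50%."""
    even = sum(areas[0::2])   # area that lands on one mask
    odd = sum(areas[1::2])    # area that lands on the other mask
    total = fixed_green + fixed_blue + even + odd

    def ratio_error(green_added):
        return abs((fixed_green + green_added) / total - 0.5)

    if ratio_error(even) <= ratio_error(odd):
        return ['G' if i % 2 == 0 else 'B' for i in range(len(areas))]  # G-B-G-B...
    return ['B' if i % 2 == 0 else 'G' for i in range(len(areas))]      # B-G-B-G...

# Illustrative values: the window already carries more blue area than green,
# so the better choice puts the two large shapes on the under-filled green mask.
print(best_alternating_coloring([4.0, 1.0, 4.0, 1.0], fixed_green=2.0, fixed_blue=8.0))
```

Because flipping a component simply swaps which mask its area lands on, the choice only matters relative to the area already committed in the same window, which is why the fixed green/blue totals are passed in.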


However, some layouts contain polygon configurations that inherently make it almost impossible to balance colors simply by changing color choices. For example, sometimes you have a very large area polygon in the midst of your layout (FIGURE 3). No matter what color you assign to the large polygon, it will dominate the color density in this region. Changing color selections in the nearby polygons doesn’t help, because they can’t all be assigned to the other color.


In this case, a new (and perhaps unexpected) solution is needed. Placing evenly distributed polygons of the opposite color in a grid on top of the large area polygon (known as reverse tone overlay fill) adds shapes to the opposite color mask in a region that would otherwise have been empty (FIGURE 4). The smaller polygons on top don’t create openings (they merely “double-block” the etch), so they have no real purpose in terms of the final wafer shape. In that regard, they are similar to dummy fill. This technique ensures the two masks have more similar color densities in this region.
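
As a rough illustration of the idea (not a foundry fill recipe), the sketch below generates a regular grid of small opposite-color squares over the bounding box of a large rectangular polygon; the sizes, pitch, and margin are hypothetical numbers chosen only for the example.

```python
# Minimal sketch of reverse tone overlay fill over a large rectangular polygon.
# The fill shapes never open the etch (they sit on the opposite-color mask on
# top of the large shape); they exist only to balance local color density.
# All dimensions are hypothetical illustration values (nm).

def reverse_tone_fill(x0, y0, x1, y1, size=40, pitch=100, margin=60):
    """Return (x, y, x+size, y+size) fill squares on a regular grid inside the
    large polygon's bounding box, keeping `margin` away from its edges."""
    squares = []
    x = x0 + margin
    while x + size <= x1 - margin:
        y = y0 + margin
        while y + size <= y1 - margin:
            squares.append((x, y, x + size, y + size))
            y += pitch
        x += pitch
    return squares

fill = reverse_tone_fill(0, 0, 1000, 600)
print(len(fill), "fill shapes added to the opposite-color mask")
```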


Color regularity

Specific configurations, such as those found in memory applications, may also need strongly controlled, repetitive coloring patterns to help the optical proximity correction (OPC) process generate more consistent results. FIGURE 5 shows three vertical instantiations of a repetitive pattern with horizontal color alternation constraints. On the left, a density-balanced legal coloring assignment is shown. However, by adding a few extra coloring constraints, you can also achieve a regular repetitive coloring pattern, as shown on the right. By introducing this color regularity, you can increase the chances of consistency in the post-OPC results.


Layout symmetry is another aspect of design that benefits from color regularity. When there is a significant amount of symmetry around a central point, such as a sensitive analog circuit, the most desirable coloring solution maintains x and y axis symmetry around the central point. In FIGURE 6, the constrained coloring solution on the right adds constraints for x and y axis symmetry to generate a mirrored coloring pattern.
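
A simple way to express that requirement is as a check that every shape’s mirror image about the central point carries the same color. The sketch below is a hypothetical consistency check on shape center points only, not a real verification flow.

```python
# Minimal sketch (not a sign-off check): verify that a coloring is mirror-
# symmetric in x and y about a central point, as desired around sensitive
# analog circuitry. Shapes are hypothetical ((x, y), color) pairs on an
# integer grid; a real check would match full polygons, not just centers.

def is_color_symmetric(shapes, cx, cy):
    """For every shape, its x-mirror and y-mirror through (cx, cy) must exist
    in the layout and carry the same color."""
    table = dict(shapes)
    for (x, y), color in shapes:
        if table.get((2 * cx - x, y)) != color:
            return False
        if table.get((x, 2 * cy - y)) != color:
            return False
    return True

# Eight shapes mirrored around (0, 0), colored symmetrically:
layout = [((-3, 2), 'G'), ((3, 2), 'G'), ((-3, -2), 'G'), ((3, -2), 'G'),
          ((-1, 1), 'B'), ((1, 1), 'B'), ((-1, -1), 'B'), ((1, -1), 'B')]
print(is_color_symmetric(layout, 0, 0))  # True
```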


DFM-aware coloring

In design for manufacturing (DFM) optimization, weak lithographic configurations are often captured as process hotspot patterns, which can be used with DFM and/or resolution enhancement technology (RET) processes to minimize the chance of a hotspot forming during manufacturing. As it turns out, the coloring of these patterns in multi-patterned designs can influence whether or not a pattern becomes a hotspot, or actually change the hotspot severity or impact of a particular pattern. If a hotspot pattern is consistently colored in all its instantiations, it may prevent that hotspot from forming, or allow a carefully tuned OPC recipe to be applied.

In FIGURE 7, a different, but still legal, coloring is applied to a rotated/reflected pattern. Because the OPC process will now affect each instance differently, the rotated pattern may become a lithographic hotspot, while the original pattern does not.


FIGURE 8 shows the same legal coloring applied to both pattern instances, which allows the same OPC to be applied to the layout in both locations, because the coloring is the same, and the polygons that end up on each mask are consistent.


Sometimes there are cases where information from other layers indicates a color preference for certain shapes. These preferences are typically the result of analysis on another layer, or of information the designer provides, such as for critical or high-voltage nets. While these preferences may sometimes conflict with each other for neighboring shapes in the same component, applying these preferences whenever possible helps drive an optimal coloring solution. In FIGURE 9, the red markers indicate a preference for placing those shapes on the green mask. In this case, there is one component that cannot comply, but placing three of the four tagged polygons on the preferred mask maximizes the preferred placements, making this the optimal coloring solution.
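
Because each connected component admits exactly two legal colorings (an assignment and its flip), maximizing preferred placements can be done component by component. The sketch below is a hypothetical illustration of that counting, using made-up polygon ids; it reproduces the three-of-four outcome described for FIGURE 9.

```python
# Minimal sketch (not an EDA tool's solver): per component, count how many
# externally supplied color preferences each of the two legal colorings
# satisfies and keep the better one.

def color_with_preferences(components, preferred):
    """components: lists of polygon ids in chain order (adjacent ids must take
    opposite colors). preferred: polygon id -> preferred color ('G' or 'B').
    Returns polygon id -> color, maximizing satisfied preferences per component."""
    assignment = {}
    for chain in components:
        option_a = {pid: ('G' if i % 2 == 0 else 'B') for i, pid in enumerate(chain)}
        option_b = {pid: ('B' if i % 2 == 0 else 'G') for i, pid in enumerate(chain)}

        def score(option):
            return sum(1 for pid, color in option.items() if preferred.get(pid) == color)

        assignment.update(option_a if score(option_a) >= score(option_b) else option_b)
    return assignment

# Hypothetical example: four tagged polygons prefer the green mask, but one
# component cannot fully comply because two of its tagged shapes must alternate.
components = [['p1', 'p2'], ['p3', 'p4', 'p5']]
preferred = {'p1': 'G', 'p3': 'G', 'p4': 'G', 'p5': 'G'}
result = color_with_preferences(components, preferred)
satisfied = sum(1 for pid, color in preferred.items() if result[pid] == color)
print(f"{satisfied} of {len(preferred)} preferences satisfied")  # 3 of 4
```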


Conclusion

In advanced process nodes, achieving the best performance and yield requires moving beyond the minimum requirements of the design rules to optimizing the layout. This optimization is a fundamental principle of all design for manufacturing (DFM) activities, including multi-patterning decomposition. There are many different situations in which special attention to color choices provides the potential to improve the manufacturing results of multi-patterned masks. Designers involved with generating the decomposed mask data before tapeout can expect to see more emphasis on color optimizations as the industry continues to refine and enhance multi-patterning processes.

Data is only as good as humans’ ability to analyze and make use of it.

In materials research, the ability to analyze massive amounts of data–often generated at the nanoscale–in order to compare materials’ properties is key to discovery and to achieving industrial use. Jeffrey M. Rickman, a professor of materials science and physics at Lehigh University, likens this process to candy manufacturing:

“If you are looking to create a candy that has, say, the ideal level of sweetness, you have to be able to compare different potential ingredients and their impact on sweetness in order to make the ideal final candy,” says Rickman.

For several decades, nanomaterials–matter that is so small it is measured in nanometers (one nanometer = one-billionth of a meter) and can be manipulated at the atomic scale–have outperformed conventional materials in strength, conductivity and other key attributes. One obstacle to scaling up production is the fact that scientists lack the tools to fully make use of data–often in the terabytes, or trillions of bytes–to help them characterize the materials–a necessary step toward achieving “the ideal final candy.”

What if such data could be easily accessed and manipulated by scientists in order to find real-time answers to research questions?

The promise of materials like DNA-wrapped single-walled carbon nanotubes could be realized. Carbon nanotubes are a tube-shaped material which can measure as small as one-billionth of a meter, or about 10,000 times smaller than a human hair. This material could revolutionize drug delivery and medical sensing with its unique ability to penetrate living cells.

A new paper takes a step toward realizing the promise of such materials. Authored by Rickman, the article describes a new way to map material properties relationships that are highly multidimensional in nature. Rickman employs methods of data analytics in combination with a visualization strategy called parallel coordinates to better represent multidimensional materials data and to extract useful relationships among properties. The article, “Data analytics and parallel-coordinate materials property charts,” has been published in npj Computational Materials, a Nature Research journal.

“In the paper,” says Rickman, “we illustrate the utility of this approach by providing a quantitative way to compare metallic and ceramic properties–though the approach could be applied to any materials you want to compare.”

It is the first paper to come out of Lehigh’s Nano/Human Interface Presidential Engineering Research Initiative, a multidisciplinary research initiative that proposes to develop a human-machine interface to improve the ability of scientists to visualize and interpret the vast amounts of data that are generated by scientific research. It was kickstarted by a $3-million institutional investment announced last year.

The leader of the initiative is Martin P. Harmer, professor of materials science and engineering. In addition to Rickman, other senior faculty members include Anand Jagota, department chair of bioengineering; Daniel P. Lopresti, department chair of computer science and engineering and director of Lehigh’s Data X Initiative; and Catherine M. Arrington, associate professor of psychology.

“Several research universities are making major investments in big data,” says Rickman. “Our initiative brings in a relatively new aspect: the human element.”

According to Arrington, the Nano/Human Interface initiative emphasizes the human because the successful development of new tools for data visualization and manipulation must necessarily include a consideration of the cognitive strengths and limitations of the scientist.

“The behavioral and cognitive science aspects of the Nano/Human Interface initiative are twofold,” says Arrington. “First, a human-factors research model allows for analysis of the current work environment and clear recommendations to the team for the development of new tools for scientific inquiry. Second, a cognitive psychology approach is needed to conduct basic science research on the mental representations and operations that may be uniquely challenged in the investigation of nanomaterials.”

Rickman’s proposed method uses parallel coordinates, which is a method of visualizing data that makes it possible to spot outliers or patterns based on related metric factors. Parallel coordinates charts can help tease out those patterns.

The challenge, says Rickman, lies in interpreting what you see.

“If plotting points in two dimensions using X and Y axes, you might see clusters of points and that would tell you something or provide a clue that the materials might share some attributes,” he explains. “But, what if the clusters are in 100 dimensions?”

According to Rickman, there are tools that can help cut down on numbers of dimensions and eliminate non-relevant dimensions to help one better identify these patterns. In this work, he applies such tools to materials with success.

“The different dimensions or axes describe different aspects of the materials, such as compressibility and melting point,” he says.
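
For readers who want to see the chart type itself, the short sketch below builds a toy parallel-coordinates plot with pandas and matplotlib; the property values and the normalization step are illustrative only and are not the data or code used in the paper.

```python
# Minimal sketch of a parallel-coordinate materials property chart: each
# vertical axis is one property, each line is one material, and metals vs.
# ceramics can be compared across many normalized properties at once.
# The property values below are made-up illustration numbers.

import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

df = pd.DataFrame({
    "class":           ["metal", "metal", "ceramic", "ceramic"],
    "density":         [7.8, 2.7, 3.2, 3.9],      # g/cm^3
    "melting_point":   [1538, 660, 2730, 2072],   # deg C
    "compressibility": [6.0, 13.0, 2.0, 3.0],     # illustrative units
})

# Normalize each property to [0, 1] so very different units share one chart.
props = df.columns.drop("class")
df[props] = (df[props] - df[props].min()) / (df[props].max() - df[props].min())

parallel_coordinates(df, class_column="class", colormap="viridis")
plt.title("Parallel-coordinate materials property chart (illustrative)")
plt.show()
```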

The charts described in the paper simplify the description of high-dimensional geometry, enable dimensional reduction and the identification of significant property correlations, and underline distinctions among different materials classes.

From the paper: “In this work, we illustrated the utility of combining the methods of data analytics with a parallel coordinates representation to construct and interpret multidimensional materials property charts. This construction, along with associated materials analytics, permits the identification of important property correlations, quantifies the role of property clustering, highlights the efficacy of dimensional reduction strategies, provides a framework for the visualization of materials class envelopes and facilitates materials selection by displaying multidimensional property constraints. Given these capabilities, this approach constitutes a powerful tool for exploring complex property interrelationships that can guide materials selection.”

Returning to the candy manufacturing metaphor, Rickman says: “We are looking for the best methods of putting the candies together to make what we want and this method may be one way of doing that.”

New frontier, new approaches

Creating a roadmap to finding the best methods is the aim of a 2½-day, international workshop called “Workshop on the Convergence of Materials Research and Multi-Sensory Data Science” that is being hosted by Lehigh University in partnership with The Ohio State University.

The workshop–which will take place at Bear Creek Mountain Resort in Macungie, PA from June 11-13, 2018–will bring together scientists from allied disciplines in the basic and social sciences and engineering to address many issues involved in multi-sensory data science as applied to problems in materials research.

“We hope that one outcome of the workshop will be the forging of ongoing partnerships to help develop a roadmap to establishing a common language and framework for continued dialogue to move this effort of promoting multi-sensory data science forward,” says Rickman, who is Principal Investigator on a National Science Foundation (NSF) grant awarded by the Division of Materials Research in support of the workshop.

Co-Principal Investigator, Nancy Carlisle, assistant professor in Lehigh’s Department of Psychology, says the conference will bring together complementary areas of expertise to allow for new perspectives and ways forward.

“When humans are processing data, it’s important to recognize limitations in the humans as well as the data,” says Carlisle. “Gathering information from cognitive science can help refine the ways that we present data to humans and help them form better representations of the information contained in the data. Cognitive scientists are trained to understand the limits of human mental processing – it’s what we do! Taking into account these limitations when devising new ways to present data is critical to success.”

Adds Rickman: “We are at a new frontier in materials research, which calls for new approaches and partners to chart the way forward.”

The Semiconductor Industry Association (SIA) today released the following statement from President & CEO John Neuffer in response to the Section 301 action taken by the Trump Administration to address China’s trade practices.

“The U.S. semiconductor industry shares the Trump Administration’s concerns regarding unfair and discriminatory trade practices that put at risk American intellectual property in China.

“We are reviewing the Administration’s Section 301 findings and proposed actions, and encourage an outcome that protects U.S. intellectual property in a manner that avoids a costly trade conflict. We welcome the opportunity to provide input on proposed tariffs, and hope to work with the Administration to avoid tariffs that would harm competitive U.S. industries and their consumers.

“Intellectual property is the lifeblood of the semiconductor industry. Semiconductors are America’s fourth-largest export and are fundamental to the strength of our economy. U.S. semiconductor companies invest nearly one-fifth of their revenue in research and development to stay at the forefront of innovation. They should be able to compete in foreign markets without putting their critical IP at risk.

“At the same time, we welcome China’s participation in the global semiconductor value chain as long as it conforms with its international obligations and is consistent with market-based principles. In the end, strong protections for intellectual property serve the long-term interests of both the United States and China.”

North America-based manufacturers of semiconductor equipment posted $2.41 billion in billings worldwide in February 2018 (three-month average basis), according to the February Equipment Market Data Subscription (EMDS) Billings Report published today by SEMI.  The billings figure is 1.7 percent higher than the final January 2018 level of $2.37 billion, and is 22.2 percent higher than the February 2017 billings level of $1.97 billion.

“February billings remain at a level indicating another positive year for semiconductor equipment spending,” said Ajit Manocha, president and CEO of SEMI. “We expect 2018 to mark the fourth consecutive year of spending growth, which last occurred in the 1990s.”

The SEMI Billings report uses three-month moving averages of worldwide billings for North American-based semiconductor equipment manufacturers. Billings figures are in millions of U.S. dollars.

Month                      Billings (3-mo. avg., $M)    Year-Over-Year
September 2017             $2,054.8                     37.6%
October 2017               $2,019.3                     23.9%
November 2017              $2,052.3                     27.2%
December 2017              $2,398.4                     28.3%
January 2018 (final)       $2,370.1                     27.5%
February 2018 (prelim)     $2,411.4                     22.2%

Source: SEMI (www.semi.org), March 2018
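
The arithmetic behind the figures above is straightforward; the sketch below shows the three-month moving average and the year-over-year calculation, using the approximate February 2017 level quoted in the text (the underlying monthly raw figures are not published in this excerpt, so only the year-over-year check is run).

```python
# Minimal sketch of the report's arithmetic: billings are three-month moving
# averages, and year-over-year compares against the same average a year earlier.

def three_month_average(monthly):
    """monthly: the last three months of worldwide billings, in $M."""
    return sum(monthly) / 3.0

def year_over_year(current, year_ago):
    return (current / year_ago - 1.0) * 100.0

# February 2018 preliminary figure versus the ~$1.97 billion February 2017
# level quoted in the text (values in $M; the Feb 2017 value is approximate).
print(round(year_over_year(2411.4, 1973.0), 1), "% year-over-year")
```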

This work explores the effect of underlying metallic alloys and the influence of Cu loss under the via bottom after dry etching and wet cleaning processes. To reduce Cu loss under the via bottom, effective approaches are proposed. The modified actions for the via bottom improve not only wafer yield but also device reliability.

By CHENG-HAN LEE and REN-KAE SHIUE, Department of Materials Science and Engineering, National Taiwan University, Taiwan, ROC

With metal line dimensional shrinkage in advanced packaging, Cu voids in metal lines cause failures such as via-induced metal-island corrosion. They impact not only yield but also device reliability, specifically electromigration (EM) and stress migration (SM). One type of Cu void is located under the via bottom and is more unpredictable than the others. The Cu void under the via bottom is caused by integrated processes such as via etch and Cu electro-chemical plating (ECP), and it differs from the Cu voids caused by the barrier/Cu-seed and ECP Cu steps. The mechanisms by which Cu voids form under the via bottom during dry etching and wet cleaning are related to the Cu dual-damascene interconnection. Both plasma damage and chemical reaction are proposed to explain the failure mechanism. In the integrated Cu interconnect process, we can define not only a safe dimension for the Cu line via depth but also process criteria with less damage and oxidation in dry etching and wet clean, based on the amount of Cu loss (Cu recess) observed in TEM inspection. The modified actions for the via bottom improve not only wafer yield but also device reliability.

Introduction

For deep sub-micrometer CMOS integrated circuits, copper (Cu) metallization has been applied in ULSI semiconductor metallization processes beyond the 0.13 μm technology node because of its lower resistivity and better reliability, especially its better electromigration resistance compared with aluminum (Al) [1–4]. At the 10 nm node and below, the front end-of-line (FEOL) device process has already transitioned from planar to FinFET transistors, but the Cu formation process has changed only slightly in back end-of-line (BEOL) metallization. There are two kinds of schemes, the single- and dual-damascene processes. The main body of Cu interconnection in the dual-damascene process includes metal trench and via etching, post-etching wet clean, deposition of barrier films and a Cu-seed layer, Cu ECP, and Cu chemical mechanical polishing (CMP). They are all similar technologies.

Even though many well-known modifications have been implemented in both mature and advanced processes, a few lethal defects that significantly damage wafer yield and device reliability, such as Cu voids and scratches, still exist after the Cu CMP process due to Cu metal corrosion. Most previous studies of Cu voids, such as those of Lu et al. [5], Song et al. [6], Wrschka et al. [7] and Wang et al. [8], focused on Cu voids on the metal line because of wafer yield concerns. Such Cu voids on the metal line can be detected by on-line electron-beam inspection, as demonstrated by Guldi et al. [9].

Reid et al. [10] described how Cu void formation can result from the step coverage of the Cu seed, the waveform function, and the additives (accelerator, suppressor and leveler) in the ECP chemical formulation. However, the mechanism of Cu void formation during the via-formation process is still unclear. Seed layers with poor coverage or quality, i.e., thin and/or discontinuous films, deteriorate the plating process and induce via-bottom voids. A systematic study of Cu void effects has not been reported. For the mature technology, Wang et al. [8] described a Cu surface cleaning (pre-cleaning) process prior to depositing the diffusion barrier metal, intended to remove the CuOx at the via bottom, reduce via resistance and improve yield. However, it caused significant Cu loss under the via bottom and deteriorated the reliability window of the process.

With metal line shrinkage in advanced CMOS processes, the Cu void under the via bottom becomes much more crucial than before; it is perhaps the most important defect from a device reliability standpoint. Unlike Cu voids or pits on the metal line, such defects cannot easily be detected by on-line defect screening methodologies, whether electrical test or wafer yield testing, because the Cu interconnection is still conducting at that point. The most decisive step for detecting Cu voids under the via bottom is the reliability test. Alers et al. [11] showed that Cu voids affect electromigration resistance. Wang et al. [12] pointed out that Cu voids under the via bottom were the major factor causing failure during stress and/or electromigration tests. In our experiment, Cu loss under the via bottom was strongly related to the high temperature storage (HTS) and high temperature operating life (HTOL) reliability tests. Thermal and/or electronic stresses may result from many processes, including Si manufacturing, bumping, wafer yield testing, and even the early failure rate (EFR) stage of reliability testing. This should be further clarified.

Experimental procedures

A. Cu scheme and process

The via structure consisted of metal chains and via holes, as displayed in FIGURE 1. A dual Cu damascene “via first” process was applied to prepare the test sample. The Cu interconnection was made by a BEOL Cu dual-damascene process that included an etching stop layer, dielectric deposition, metal line/via lithography, metal line/via dry etching, a post-etching wet clean using deionized water (DIW) with a discharging gas, deposition of barrier films and a Cu-seed layer, Cu ECP, and Cu CMP.


In advanced technology, the decrease in EM resistance with metal line shrinkage of Cu interconnects was a major concern, specifically for metal line and via bottom dimensions below 30 nm. As the interconnect dimensions shrank, the EM resistance of Cu interconnects deteriorated, decreasing the service life of the device. Doping the Cu interconnects with appropriate elements is one engineering approach to improving the EM resistance of Cu damascene. Manganese (Mn) is one of the most popular elements applied in Cu doping. Mn can diffuse through the Cu interconnect and segregate along the interface between Cu and the low-k dielectric layer, where it serves as a barrier layer, adhesion promoter and oxidation retardant, because the diffusivity of Mn in Cu is much faster than the self-diffusivity of Cu, approximately one order of magnitude higher. Mn atoms initially alloyed in the Cu migrate to the surface and interface and form an oxide layer, leaving nearly pure Cu behind after the annealing step. In addition, Mn can also repair a discontinuous barrier layer (Ta/TaN) by forming a local manganese silicate diffusion barrier layer, the so-called self-forming Cu-Mn diffusion barrier [13,14].

In this research, both Cu/1% Mn and Cu/1% Al, serving as underlying alloys, were evaluated in terms of Cu recess. Cu/1% Al was included for comparison. The main body of the Cu interconnection in the dual-damascene process included via etching, post-etching wet clean, deposition of barrier films and a Cu-seed layer, and ECP. The samples were split by key process variables, such as the dry etching power and the discharging gas flow rate of the post-etching wet clean. The effect of these process variables on Cu loss under the via bottom was evaluated in the experiment.

B. Methodology

FIGURE 2 illustrates a schematic diagram of the Cu recess in the device. The Cu recess at the via bottom was observed using step-by-step TEM after the dry etch and wet clean processes; the Cu line receded below the original via bottom after these steps. The Cu recess data helped define which stage played the crucial role in Cu loss at the via bottom. Electrical and wafer yield tests were applied in order to locate any abnormality after all processes were completed.


To unveil the effects of thermal/electronic stresses on Cu voids under the via bottom, HTS (175°C) and HTOL (175°C with double the device operating voltage) were performed, and the wafer yield swap after HTS and HTOL was evaluated. The wafer yield swap compares the yield before and after HTS and HTOL; a good die fails if Cu loss under the via bottom occurs. After the yield-swap dice were confirmed, failure analysis was performed by focused ion beam (FIB), scanning electron microscopy (SEM) and transmission electron microscopy (TEM). In addition, chemical analysis was performed using energy dispersive spectroscopy (EDS).

Results and discussion

A special metal line/via structure with a high aspect ratio of approximately 5 was designed in order to exacerbate Cu loss under the via bottom. We inspected the Cu recess of two different underlying metals, Cu/1% Mn and Cu/1% Al. FIGURE 3 displays the Cu recesses of the Cu/1% Al and Cu/1% Mn underlying metals, respectively. Under the same process conditions, the Cu recess of Cu/1% Mn was only half that of Cu/1% Al, so Cu/1% Mn was more protective than Cu/1% Al. There was a strong correlation between the EM cumulative failure rate and the type of underlying metal. Cu/1% Al showed a much lower time to failure (TTF) and deteriorated EM performance compared with Cu/1% Mn. This clearly demonstrated that Cu/1% Mn was more protective than Cu/1% Al; the failure rate of Cu/1% Mn was only 1/30 that of Cu/1% Al. Because the performance of Cu/1% Al was significantly inferior to that of Cu/1% Mn, Cu/1% Al was selected for the following tests in order to magnify the differences among the other key process variables.


In the standard (STD) condition, the Cu recess was inspected by step-by-step TEM after the dry etching and the post-etching wet clean with discharging gas, showing approximately 5 nm and 7 nm (12 nm – 5 nm = 7 nm) of Cu loss in depth, respectively, as shown in FIGURE 4. The following barrier film and Cu-seed processes only slightly consumed the underlying Cu; the Cu recess increased by only 0.3 nm during barrier film deposition. The pre-cleaning process was necessary before barrier film deposition in order to remove CuO on the Cu surface for improved adhesion. Based on the step-by-step TEM observations of Cu recess, the post-etching wet clean process also played an important role in the Cu recess at the via bottom.
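
In other words, each process step’s contribution is simply the difference between successive cumulative TEM measurements; a small sketch of that bookkeeping, using the recess values quoted above:

```python
# Minimal sketch of the step-by-step recess bookkeeping described above
# (values taken from the text): cumulative Cu recess is measured by TEM after
# each process step, and the per-step contribution is the difference between
# successive measurements.

cumulative_recess_nm = {
    "dry etch":            5.0,   # recess measured after dry etching
    "post-etch wet clean": 12.0,  # recess measured after wet clean with discharging gas
    "barrier/Cu-seed":     12.3,  # recess measured after barrier film deposition
}

previous = 0.0
for step, total in cumulative_recess_nm.items():
    print(f"{step}: +{total - previous:.1f} nm (cumulative {total:.1f} nm)")
    previous = total
```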


Dry etching by plasma not only eroded about 5 nm of Cu in depth under the via bottom but also oxidized the underlying Cu, which was supposed to be removed in the subsequent wet cleaning process. The post-etching wet clean included a chemical solvent to clean the by-products of dry etching and a DI water clean to remove the chemical solvent. The DI water was used with a discharging gas, such as CO2, in order to neutralize the charge accumulated during the previous plasma dry etching. However, the discharging gas acidified the DI water and resulted in Cu loss during the post-etching wet cleaning process.

FIGURE 5 shows the Cu recesses with different dry etching power splits. Changing the plasma power changed the degree of Cu recess. At 200 W less than STD, i.e., STD-200W, the Cu recess was less than 3 nm. Although the structure looks good in shape, poor performance was observed in the electrical test and wafer yield after the process was completed. A via open resulted, with the upper Cu disconnected from the underlying Cu, as demonstrated by TEM observation (Fig. 5). It was deduced that the dry etching process did not etch the entire via hole, especially the dielectric layer. Although the post-etching wet clean slightly extended the open area under the via bottom, the barrier films were not well deposited in the via hole. Therefore, poor coverage was obtained from the subsequent ECP process, and the via resistance increased significantly as the dry etching power was decreased to STD-200W.


 

FIGURE 6 shows the wafer yields after open/short tests with different dry etching power splits. In the open/short tests, the failure rate decreased with decreasing dry etching power from STD+100W to STD-100W due to less damage to the Cu substrate at lower dry etching power; the Cu recess decreased from 17.9 nm (STD+100W) to 8.7 nm (STD-100W), as demonstrated in FIGURE 5. However, a dramatically increased failure rate was observed when the dry etching power was decreased to 200 W less than STD (STD-200W), because the lowest dry etching power was insufficient to open the entire via hole, which increased the via resistance. The failure rate of STD-200W was therefore as high as 10%, as displayed in Fig. 6. There was an optimal dry etching power of STD-100W that maximized the wafer yield in the experiment.


FIGURE 7 shows the variation of Cu recess with different discharging gas flow splits in the post-etching wet cleaning process. The discharging gas flow was strongly related to the Cu recess, demonstrating that the chemistry of the wet clean also played a crucial role in the Cu recess. FIGURE 8 shows that the wafer yield failure rate decreased with decreasing post-wet-clean discharging flow from STD+200 sccm to STD-400 sccm. The major function of the discharging gas, CO2, was to neutralize the charge accumulated during the previous plasma dry etching, so it was necessary in the post-etching wet cleaning process. However, it should be kept below STD-300 sccm in order to improve wafer yield in the experiment.


The reliability test result of HTOL with thermal and electronic stresses over 168 hours showed several good chips turning into bad ones in the open/short bin, which is called a bin swap. FIB, SEM, TEM and EDS were used in the failure analyses. FIGURE 9 shows the comparison of Cu recesses before and after the HTOL test for 168 hours. A deeper Cu recess was clearly observed after the stress was applied. Before the stress was applied, the via interconnect was still linked to the underlying metal line; this is the key reason why it was difficult to detect this type of failure in the electrical test. In Fig. 9, the Cu recess before the stress was applied was 23.3 nm, and it extended to 42.4 nm after the HTOL test for 168 hours. The Cu recess extended to twice or even three times its original depth after the thermal and electronic stresses were applied. Therefore, the quality of the via bottom joint was greatly deteriorated if there were Cu voids under the via bottom. As thermal and electrical stresses were applied to the via bottom, the crack propagated across the entire via bottom, and the via bottom was finally disconnected from the underlying metal line, the so-called via open in the semiconductor industry.


FIGURE 10 shows the TEM bright field image and the EDS mapping of Ta at the failure location after HTOL for 168 hours. Taking a close look at the via bottom next to the interface with the underlying metal line, a non-uniform barrier film was widely observed, as shown in Fig. 10(a); this was the original failure location. In Fig. 10(a), TEM inspection of the failure location after the HTOL test for 168 hours showed significant Cu loss, more than 30 nm, under the via bottom. This was much greater than the Cu recess before the thermal and electrical stress was applied (12 nm). Based on the EDS mapping of Ta (Fig. 10(b)), the barrier film, TaN, was formed adjacent to the Cu loss at the via bottom. It is important to note that the TaN had almost disappeared from the corner of the via bottom. The disconnection of the barrier film from the corner resulted in a deteriorated Cu interface, and the Cu began to degenerate and shrink under the applied thermal and electronic stresses. This finally resulted in separation of the upper and underlying Cu; the via bottom was completely opened and caused the failure of the device.


Summary

With metal line dimensional shrinkage in advanced packaging, Cu metallization has increased concerns about the long-term reliability of devices caused by Cu loss under the via bottom. This work explores the effect of underlying metallic alloys and the influence of Cu loss under the via bottom after dry etching and wet clean. The important conclusions are listed below:

1. Cu/1% Mn is more protective than the original Cu/1% Al. The application of Cu/1% Mn improves both the EM and SM resistance of the via bottom.

2. Both the plasma power of dry etching and the discharging gas flow of the wet clean play important roles in Cu loss under the via bottom. Cu loss is initiated first during dry etching due to plasma damage; the plasma not only etches the underlying Cu at the via bottom but also oxidizes the underlying Cu surface. The subsequent post-etching wet clean, with acidic water generated by the discharging gas, removes the CuO at the interface and causes further Cu loss. These are the major mechanisms of Cu loss under the via bottom. Pre-cleaning before barrier film deposition to remove superficial CuO from the Cu for better adhesion is only a minor factor in Cu loss under the via bottom.

3. To reduce Cu loss under the via bottom, effective approaches include applying a protective metal line alloy such as Cu/1% Mn, and minimizing interfacial damage by decreasing the dry etching power and the discharging gas flow of the post-etching wet clean.

Acknowledgement

The authors gratefully acknowledge the support of Taiwan Semiconductor Manufacturing Company (TSMC) for this study.

References

1. K. Ueno, M. Suzuki, A. Matsumoto, K. Motoyama, T. Tonegawa, N. Ito, K. Arita, Y. Tsuchiya, T. Wake, A. Kubo, K. Sugai, N. Oda, H. Miyamoto, S. Satio, “A high reliability copper dual-damascene interconnection with direct-contact via structure”, 2000 IEDM Tech. Digest, IEEE (2000), p. 265.
2. M.H. Tsai, W.J. Tsai, S.L. Shue, C.H. Yu, M.S. Liang, “Reliability of dual damascene Cu metallization”, in: Proceedings of the 2000 International Interconnect Technology Conference, IEEE (2000), p. 214.
3. C. Ryu, K.W. Kwon, A.L.S. Loke, H. Lee, T. Nogami, V.M. Dubin, R.A. Kavari, G.W. Ray, S.S. Wong, “Microstructure and reliability of copper interconnects”, IEEE Trans. Electron Devices 46 (1999), p. 1113.
4. M.H. Tsai, R. Augur, V. Blaschke, R.H. Havemann, E.F. Ogawa, P.S. Ho, W.K. Yeh, S.L. Shue, C.H. Yu, M.S. Liang, “Electromigration reliability of dual damascene Cu/CVD SiOC interconnects”, in: Proceedings of the 2001 International Interconnect Technology Conference, IEEE (2001).
5. J.P. Lu, L. Chen, D. Gonzalez, H.L. Guo, D.J. Rose, M. Marudachalam, W.U. Hsu, H.Y. Liu, F. Cataldi, B. Chatterjee, P.B. Smith, P. Holverson, R.L. Guldi, N.M. Russell, G. Shinn, S. Zuhoski, J.D. Luttmer, “Understanding and eliminating defects in electroplated Cu films”, in: Proceedings of the 2001 IEEE International Interconnect Technology Conference (2001), p. 280.
6. Z.G. Song, S.K. Loh, M. Gunawardana, C.K. Oh, S. Redkar, “Unique defects and analyses with copper damascene process for multilevel metallization”, in: Proceedings of the 10th International Symposium on the Physical and Failure Analysis of Integrated Circuits (IPFA 2003), p. 12.
7. P. Wrschka, J. Hernandez, G.S. Oehrlein, J.A. Negrych, G. Haag, P. Rau, J.E. Currie, “Development of a slurry employing a unique silica abrasive for the CMP of Cu damascene structures”, J. Electrochem. Soc. 148 (2001), p. 321.
8. T.C. Wang, Y.L. Wang, T.E. Hsieh, S.C. Chang, Y.L. Cheng, “Copper voids improvement for copper dual damascene interconnection process”, J. Phys. Chem. Solids 69 (2008), p. 566.
9. R.L. Guldi, J.B. Shaw, J. Ritchison, S. Oestreich, K. Davis, R. Fiordalice, “Characterization of copper voids in dual damascene processes”, in: Proceedings of the 2002 IEEE/SEMI Advanced Semiconductor Manufacturing Conference and Workshop (2002), p. 351.
10. J. Reid, V. Bhaskaran, R. Contolini, E. Patton, R. Jackson, E. Broadbent, T. Walsh, S. Mayer, R. Schetty, J. Martin, M. Toben, S. Menard, “Optimization of damascene feature fill for copper electroplating process”, in: Proceedings of the IEEE International Interconnect Technology Conference (1999), p. 284.
11. G.B. Alers, D. Dornisch, J. Siri, K. Kattige, L. Tam, E. Broadbent, G.W. Ray, “Trade-off between reliability and post-CMP defects during recrystallization anneal for copper damascene interconnects”, in: Proceedings of the 39th Annual IEEE International Reliability Physics Symposium (2001), p. 350.
12. T.C. Wang, T.E. Hsieh, M.T. Wang, D.S. Su, C.H. Chang, Y.L. Wang, J.Y.M. Lee, “Stress migration and electromigration improvement for copper dual damascene interconnection”, J. Electrochem. Soc. 152 (2005), p. 45.
13. J. Koike, M. Haneda, J. Iijima, M. Wada, “Cu alloy metallization for self-forming barrier process”, IEEE Interconnect Technology Conference (2006), p. 161.
14. J. Koike, M. Wada, “Self-forming diffusion barrier layer in Cu-Mn alloy metallization”, Appl. Phys. Lett. 87, 041911 (2005).
15. J.P. Wang, Y.K. Su, “Effects of surface cleaning on stress voiding and electromigration of Cu-damascene interconnection”, IEEE Trans. Device Mater. Reliab. (2008), p. 210.

By Jay Chittooran, SEMI Public Policy

Following through on his 2016 campaign promise, President Trump is implementing trade policies that buck conventional wisdom in Washington, D.C. and among U.S. businesses. Stiff tariffs and the dismantling of longstanding trade agreements – cornerstones of these new actions – will ripple through the semiconductor industry with particularly damaging effect. China, a chief target of criticism from President Trump, has again found itself in the crosshairs of the administration, with trade tensions rising to a fever pitch.

The Trump Administration has long criticized China for what it considers unfair trade practices, often zeroing in on intellectual property. In August 2017, the Office of the U.S. Trade Representative (USTR), charged with developing and recommending U.S. trade policy to the president, launched a Section 301 investigation into whether China’s practice of forced technology transfer has discriminated against U.S. firms. As the probe continues, it is becoming increasingly clear that the United States will impose tariffs on China based on its current findings. Reports suggest that the tariffs could come soon, hitting a range of products from consumer electronics to toys. Other measures could include tightening restrictions on the trade of dual-use goods – those with both commercial and military applications – curbing Chinese investment in the United States, and imposing strict limits on the number of visas issued to Chinese citizens.

With China a major and intensifying force in the semiconductor supply chain, raising tariffs hangs like the Sword of Damocles over the U.S. and global economies. A tariff-ignited trade war with China could stifle innovation, undermine the long-term health of the semiconductor industry, and lead to unintended consequences such as higher consumer prices, lower productivity, job losses and, on a global scale, a brake on economic growth.

Other recently announced U.S. trade actions could also cloud the future for semiconductor companies. The Trump administration, based on two separate Section 232 investigations claiming that overproduction of both steel and aluminum is a threat to U.S. national security, recently levied a series of tariffs and quotas on every country except Canada and Mexico. While these tariffs have yet to take effect, the mere prospect has angered U.S. trading partners – most notably Korea, the European Union and China. Several countries have threatened retaliatory action and others have taken their case to the World Trade Organization.

Trade is oxygen to the semiconductor industry, which grew by nearly 30 percent last year and is expected to be valued at an estimated $1 trillion by 2030. Make no mistake: SEMI fully supports efforts to buttress intellectual property protections. However, the Trump administration’s unfolding trade policy could antagonize U.S. trade partners.

For its part, SEMI is weighing in with USTR on these issues, underscoring the critical importance of trade to the semiconductor industry as we educate policymakers on trade barriers to industry growth and encourage unobstructed cross-border commerce to advance semiconductors and the emerging technologies they enable. On behalf of our members, we continue our work to increase global market access and lessen the regulatory burden on global trade. If you are interested in more information on trade, or how to be involved in SEMI’s public policy program, please contact Jay Chittooran, Public Policy Manager, at [email protected].

Originally published on the SEMI blog.

SEMICON West, the flagship U.S. event for connecting the electronics manufacturing supply chain, has opened registration for the July 10-12, 2018, exposition at the Moscone Center in San Francisco, California. Building on a year of record-breaking industry growth, SEMICON West 2018 will highlight the engines of future industry expansion including smart transportation, smart manufacturing, smart medtech, smart data, big data, artificial intelligence, blockchain and the Internet of Things (IoT).

Themed BEYOND SMART, SEMICON West 2018 sets its sights on the growing impact of cognitive learning technologies and other industry disruptors with programs and new Smart Pavilions including Smart Manufacturing and Smart Transportation to showcase interactive technologies for immersive, virtual experiences. Each Pavilion will feature a Meet the Experts Theater with an intimate setting for attendees to engage informally with industry thought leaders.

Smart Workforce Pavilion: Connecting Next-Generation Talent with the Microelectronics Industry

The SEMI Smart Workforce Pavilion at SEMICON West 2018 leverages the largest microelectronic manufacturing event in North America to draw the next generation of innovators. Reliant on a highly skilled workforce, the industry today is saddled with thousands of job openings and fierce competition for workers, bringing renewed focus to strengthening its talent pipeline. Educational and engaging, the Pavilion connects the microelectronics industry with college students and entry-level professionals interested in career opportunities.

In the Workforce Pavilion “Meet the Experts” Theater, industry engineers will share insights and inspiration about their personal working experiences and career advisors will offer best practices. Recruiters from top companies will be available for on-the-spot interviews, while career coaches offer mentoring, tips on cover letter and resume writing, job-search guidance, and more. Visitors will learn more about the industry’s vital role in technological innovation in today’s connected world.

This year, SEMI will also host High Tech U (HTU) in conjunction with the SEMICON West Smart Workforce Pavilion. The highly-interactive program supported by Advantest, Edwards, KLA-Tencor and TEL exposes high school students to STEM education pathways and stimulates excitement about careers in the industry.

Free registration with three-day access and shuttle service to SEMICON West are available to all college students. Students are encouraged to register for the mentor program, attend keynotes and tour the exposition hall to see everything the industry has to offer.  To learn more, visit Smart Workforce Pavilion and College Track to preview how students can enter to win a $500 hiring bonus!

Three Ways to Experience the Expo

Attendees can tailor their SEMICON West experience to meet their specific interests. The All-In pass covers every program and event, while the Thought-Leadership and Expo-Only packages offer scaled pricing and program options. Attendees can also purchase select events and programs à la carte, including exclusive IEEE-sponsored sessions, the SEMI Market Symposium, and the STEM Rocks After-hours Party, a fundraising event to support the SEMI Foundation.

The ConFab — an executive invitation-only conference now in its 14th year — brings together influential decision-makers from all parts of the semiconductor supply chain for three days of thought-provoking talks and panel discussions, networking events and select, pre-arranged breakout business meetings.

In the 2018 program, we will take a close look at the new applications driving the semiconductor industry, the technology that will be required at the device and process level to meet new demands, and the kind of strategic collaboration that will be required. It is this combination of business, technology and social interactions that makes the conference so unique and so valuable. Browse this slideshow for a look at this year’s speakers, keynotes, panel discussions, and special guests.

Visit The ConFab’s website for a look at the full, three-day agenda for this year’s event.

KEYNOTE: How AI is Driving the New Semiconductor Era

Presented by Rama Divakaruni, Advanced Process Technology Research Lead, IBM

The exciting results of AI have been fueled by the exponential growth in data, the widespread availability of increased compute power, and advances in algorithms. Continued progress in AI – now in its infancy – will require major innovation across the computing stack, dramatically affecting logic, memory, storage, and communication. Already the influence of AI is apparent at the system-level by trends such as heterogeneous processing with GPUs and accelerators, and memories with very high bandwidth connectivity to the processor. The next stages will involve elements which exploit characteristics that benefit AI workloads, such as reduced precision and in-memory computation. Further in time, analog devices that can combine memory and computation, and thus minimize the latency and energy expenditure of data movement, offer the promise of orders of magnitude power-performance improvements for AI workloads. Thus, the future of AI will depend instrumentally on advances in devices and packaging, which in turn will rely fundamentally on materials innovations.

IC Insights’ latest market, unit, and average selling price forecasts for 33 major IC product segments for 2018 through 2022 are included in the March Update to the 2018 McClean Report (MR18). The Update also includes an analysis of the major semiconductor suppliers’ capital spending plans for this year.

The biggest adjustments to the original MR18 IC market forecasts were to the memory market, specifically the DRAM and NAND flash segments. The DRAM and NAND flash memory market growth forecasts for 2018 have been adjusted upward to 37% for DRAM (13% shown in MR18) and 17% for NAND flash (10% shown in MR18).

The big increase in the DRAM market forecast for 2018 is primarily due to a much stronger ASP expected for this year than was originally forecast.  IC Insights now forecasts that the DRAM ASP will register a 36% jump in 2018 as compared to 2017, when the DRAM ASP surged by an amazing 81%.  Moreover, the NAND flash ASP is forecast to increase 10% this year, after jumping by 45% in 2017.  In contrast to strong DRAM and NAND flash ASP increases, 2018 unit volume growth for these product segments is expected to be up only 1% and 6%, respectively.
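
As a quick consistency check, the market growth figures follow from compounding the ASP and unit-volume forecasts quoted above:

```python
# Minimal sketch checking the consistency of the figures quoted above:
# market growth compounds ASP growth and unit-volume growth.

def market_growth(asp_growth_pct, unit_growth_pct):
    return ((1 + asp_growth_pct / 100) * (1 + unit_growth_pct / 100) - 1) * 100

print(round(market_growth(36, 1)))  # ~37% DRAM market growth forecast for 2018
print(round(market_growth(10, 6)))  # ~17% NAND flash market growth forecast
```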

At $99.6 billion, the DRAM market is forecast to be by far the largest single product category in the IC industry in 2018, exceeding the expected NAND flash market ($62.1 billion) by $37.5 billion.  Figure 1 shows that the DRAM market has provided a significant tailwind or headwind for total worldwide IC market growth in four out of the last five years.

The DRAM market dropped by 8% in 2016, spurred by a 12% decline in ASP, and the DRAM segment became a headwind to worldwide IC market growth that year instead of the tailwind it had been in 2013 and 2014.  As shown, the DRAM market shaved two percentage points off of total IC industry growth in 2016.  In contrast, the DRAM segment boosted total IC market growth last year by nine percentage points. For 2018, the expected five point positive impact of the DRAM market on total IC market growth is forecast to be much less significant than it was in 2017.

Figure 1


A scientific team led by the Department of Energy’s Oak Ridge National Laboratory has found a new way to take the local temperature of a material from an area about a billionth of a meter wide, or approximately 100,000 times thinner than a human hair.

This discovery, published in Physical Review Letters, promises to improve the understanding of useful yet unusual physical and chemical behaviors that arise in materials and structures at the nanoscale. The ability to take nanoscale temperatures could help advance microelectronic devices, semiconducting materials and other technologies, whose development depends on mapping the atomic-scale vibrations due to heat.

From left, Andrew Lupini and Juan Carlos Idrobo use ORNL’s new monochromated, aberration-corrected scanning transmission electron microscope, a Nion HERMES to take the temperatures of materials at the nanoscale. Credit: Oak Ridge National Laboratory, US Dept. of Energy; photographer Jason Richards

The study used a technique called electron energy gain spectroscopy in a newly purchased, specialized instrument that produces images with both high spatial resolution and great spectral detail. The 13-foot-tall instrument, made by Nion Co., is named HERMES, short for High Energy Resolution Monochromated Electron energy-loss spectroscopy-Scanning transmission electron microscope.

Atoms are always shaking. The higher the temperature, the more the atoms shake. Here, the scientists used the new HERMES instrument to measure the temperature of semiconducting hexagonal boron nitride by directly observing the atomic vibrations that correspond to heat in the material. The team included partners from Nion (developer of HERMES) and Protochips (developer of a heating chip used for the experiment).

“What is most important about this ‘thermometer’ that we have developed is that temperature calibration is not needed,” said physicist Juan Carlos Idrobo of the Center for Nanophase Materials Sciences, a DOE Office of Science User Facility at ORNL.

Other thermometers require prior calibration. To make temperature graduation marks on a mercury thermometer, for example, the manufacturer needs to know how much mercury expands as the temperature rises.

“ORNL’s HERMES instead gives a direct measurement of temperature at the nanoscale,” said Andrew Lupini of ORNL’s Materials Science and Technology Division. The experimenter needs only to know the energy and intensity of an atomic vibration in a material–both of which are measured during the experiment.

These two features are depicted as peaks, which are used to calculate a ratio between energy gain and energy loss. “From this we get a temperature,” Lupini explained. “We don’t need to know anything about the material beforehand to measure temperature.”
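
The underlying physics is the detailed-balance relation between phonon absorption and emission: the gain peak is weaker than the loss peak by a Boltzmann factor, so the measured intensity ratio fixes the temperature once the vibration energy is known. The sketch below illustrates that relation with hypothetical peak values; it is a simplified stand-in, not the paper’s actual analysis.

```python
# Minimal sketch (standard detailed-balance relation, not the paper's exact
# analysis): at temperature T, the ratio of the energy-gain peak to the
# energy-loss peak for a vibration of energy E follows
#   I_gain / I_loss = exp(-E / (kB * T)),
# so T can be extracted from the two measured peak intensities alone.

import math

KB_EV_PER_K = 8.617333e-5  # Boltzmann constant in eV/K

def temperature_from_gain_loss(energy_ev, i_gain, i_loss):
    """energy_ev: vibrational energy of the peak; i_gain, i_loss: peak intensities."""
    return energy_ev / (KB_EV_PER_K * math.log(i_loss / i_gain))

# Hypothetical numbers: a 0.17 eV vibration whose gain peak is ~0.14% of the
# loss peak comes out near room temperature.
print(round(temperature_from_gain_loss(0.17, 0.0014, 1.0)), "K")  # ~300 K
```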

In 1966, also in Physical Review Letters, H. Boersch, J. Geiger and W. Stickel published a demonstration of electron energy gain spectroscopy, at a larger length scale, and pointed out that the measurement should depend upon the temperature of the sample. Based on that suggestion, the ORNL team hypothesized that it should be possible to measure a nanomaterial’s temperature using an electron microscope with an electron beam that is “monochromated” or filtered to select energies within a narrow range.

To perform electron energy gain and loss spectroscopy experiments, scientists place a sample material in the electron microscope. The microscope’s electron beam goes through the sample, with the majority of electrons barely interacting with the sample. In electron energy loss spectroscopy, the beam loses energy as it passes through the sample, whereas in energy gain spectroscopy, the electrons gain energy from interacting with the sample.

“The new HERMES lets us look at very tiny energy losses and even very small amounts of energy gain by the sample, which are even harder to observe because they are less likely to happen,” Idrobo said. “The key to our experiment is that statistical physical principles tell us that it is more likely to observe energy gain when the sample is heated. That is precisely what allowed us to measure the temperature of the boron nitride. The monochromated electron microscope enables this from nanoscale volumes. The ability to probe such exquisite physical phenomena at these tiny scales is why ORNL purchased the HERMES.”

ORNL scientists are constantly pushing the capabilities of electron microscopes to allow new ways of conducting forefront research. When Nion electron microscope developer Ondrej Krivanek asked Idrobo and Lupini, “Wouldn’t it be fun to try electron energy gain spectroscopy?” they jumped at the chance to be the first to explore this capability of their HERMES instrument.

Nanoscale resolution makes it possible to characterize the local temperature during phase transitions in materials–an impossibility with techniques that do not have the spatial resolution of HERMES spectroscopy. For example, an infrared camera is limited by the wavelength of infrared light to much larger objects.

Whereas in this experiment the scientists tested nanoscale environments from room temperature to about 1300 degrees Celsius (2372 degrees Fahrenheit), the HERMES could be useful for studying devices working across a wide range of temperatures, from electronics that operate under ambient conditions to vehicle catalysts that perform at over 300 C/600 F.