Category Archives: Materials and Equipment

Subtleties in thicknesses between the alternating Cu metal and dielectric layers within a build-up substrate can impact BLR performance.

BY JAIMAL WILLIAMSON, Texas Instruments, Dallas, TX

Managing an organization in an orderly and disciplined manner is known as “running a tight ship.” The importance of this mentality and discipline cannot be overstated with respect to build-up substrate supplier capability and manufacturing tolerances as they relate to reliability and margin in a flip chip ball grid array (FCBGA) device. Build-up substrate technology is the backbone of flip chip packaging due to its ability to bridge high-density interconnects and functionality, enabling improved electrical performance in tandem with the semiconductor chip. Alternating metal and dielectric layers build up the substrate into the final composite structure. The range of thicknesses of these metal and dielectric layers depends on the substrate manufacturer’s design rules, which can have an impact on board level reliability (BLR). A keen awareness of substrate supplier design rules aids not only troubleshooting but also the understanding of reliability margin from a chip-to-package interaction standpoint for an array of commercial and automotive FCBGA applications.

Influence of copper and dielectric layers on reliability

To better understand how thickness variation of the bottommost substrate copper (Cu) metal layer (15 +/- 5μm) and dielectric layer (30 +/- 6μm) affects the strain energy density of the BGA solder joints at the die shadow area and package corner, a 3×3 factorial design of experiments (DoE) approach (FIGURE 1) was pursued. Through the use of finite element modeling, outputs of the study included strain energy density under -40°C to 125°C and 0°C to 100°C BLR temperature cycle conditions, as well as changes in coefficient of thermal expansion (CTE) as Cu metal and dielectric thicknesses varied. For the remainder of the article, results from the more stringent -40°C to 125°C BLR temperature cycle condition are discussed.
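
For illustration, the short Python sketch below enumerates a 3×3 full-factorial matrix of the kind described above. The factor levels are assumed from the stated tolerances (Cu 15 +/- 5μm, dielectric 30 +/- 6μm); the actual DoE legs are those defined in FIGURE 1.

```python
# Illustrative 3x3 full-factorial DoE for the two layer thicknesses discussed in
# the article. The levels are assumed from the stated specs (Cu 15 +/- 5 um,
# dielectric 30 +/- 6 um); the real legs are defined in Figure 1.
from itertools import product

cu_levels_um = [10, 15, 20]          # low / nominal / high Cu thickness
dielectric_levels_um = [24, 30, 36]  # low / nominal / high dielectric thickness

doe_legs = [
    {"leg": i + 1, "cu_um": cu, "dielectric_um": di}
    for i, (cu, di) in enumerate(product(cu_levels_um, dielectric_levels_um))
]

for leg in doe_legs:
    print(leg)  # each leg would then be run through the finite element model
```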

FIGURE 1. 3×3 factorial DoE.

The rationale for the study was a striking difference in BLR performance between two FCBGA daisy chain test vehicles having an identical substrate design but manufactured at two different substrate suppliers (noted as suppliers A and B in this article). The FCBGA daisy chain test vehicle comprises the following package attributes (see FIGURE 2 for a side view example):
• 40mm x 40mm body size
• 8-layer build-up stack (3/2/3)
• 400μm core thickness
• 1mm BGA pitch

FIGURE 2. Example of FCBGA package.

Weibull analysis was generated from empirical BLR results at 5 percent and 63.2 percent cycles to failure. Specifically, at 5 percent cycles to failure, supplier A exhibits ~25 percent lower BGA solder joint fatigue life than its counterpart from supplier B (as illustrated in FIGURES 3 and 4).
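
For reference, the sketch below shows how cycles-to-failure values at the 5 percent and 63.2 percent points follow from a two-parameter Weibull fit; the shape and characteristic-life numbers are hypothetical placeholders, not the suppliers’ actual fit parameters.

```python
# Cycles to failure at a given percentile from a two-parameter Weibull fit:
# N_p = eta * (-ln(1 - p))**(1/beta). The 63.2 percent point equals the
# characteristic life eta by definition. beta/eta below are assumed values.
import math

def cycles_at_percentile(p, beta, eta):
    """Cycles at which a fraction p of the population has failed."""
    return eta * (-math.log(1.0 - p)) ** (1.0 / beta)

beta, eta = 8.0, 4000.0  # hypothetical Weibull shape and characteristic life (cycles)
for p in (0.05, 0.632):
    print(f"{p:.1%}: {cycles_at_percentile(p, beta, eta):.0f} cycles")
```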

FIGURE 3. Weibull plot of supplier A.

FIGURE 4. Weibull plot of supplier B.

In a similar study focusing on component level reliability (CLR), it was observed that the bottommost substrate Cu layer thickness can impact stress underneath the die shadow area. For this reason, a more detailed examination was done to measure the bottommost substrate Cu layer thickness on daisy chain units from suppliers A and B. Based on package construction analysis, supplier A was found to target the nominal value of 15μm, whereas supplier B targeted the high end of the specification at 20μm. These Cu thickness differences would play a significant role in the BLR results.

Stress modeling results

Outputs of the finite element modeling are shown in FIGURE 5, based on inputs from the 3×3 factorial DoE illustrated in Fig. 1. Across the combinations of Cu and dielectric layer thicknesses evaluated, thicker dielectric and Cu layers yield higher macroscopic CTE values. This is an expected trend based on the CTE of the Cu and dielectric layers relative to the substrate core material; simulation results confirmed that CTE in descending order is: dielectric layer > Cu layer > substrate core. Comparing the Weibull analyses from suppliers A and B (FIGURES 3 and 4), DoE legs 4 and 6 match best, respectively, to the empirical BLR results. In addition, DoE legs 4 and 6 align with the bottommost substrate Cu layer thickness values from the aforementioned package construction analysis measurements. Based on the modeling results, an approximately 2 percent change in CTE can swing the cycles to failure at 63.2 percent by ~11 percent. DoE leg 4 uses the nominal Cu thickness of 15μm, whereas leg 6 uses the high end of the Cu thickness tolerance at 20μm; dielectric thickness is at the nominal value of 30μm in both cases. The improved BLR performance from supplier B is attributed to the thicker Cu providing a better CTE match to the BLR test board.
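
For intuition on why thicker Cu and dielectric layers raise the macroscopic CTE, a simplified stiffness-weighted rule-of-mixtures estimate is sketched below. This is not the finite element model used in the study; the layer counts and material properties are assumptions chosen only to illustrate the trend.

```python
# Simplified stiffness-weighted rule-of-mixtures estimate of in-plane CTE for a
# layered substrate. All thicknesses, moduli and CTE values below are assumed
# for illustration; the study itself relied on finite element modeling.
def effective_cte(layers):
    """layers: list of (thickness_um, modulus_GPa, cte_ppm_per_C) tuples."""
    weights = [t * e for t, e, _ in layers]
    return sum(w * a for w, (_, _, a) in zip(weights, layers)) / sum(weights)

def stack(bottom_cu_um):
    return (
        [(400, 25, 11)]              # core (assumed properties)
        + [(30, 5, 46)] * 6          # build-up dielectric layers (assumed)
        + [(15, 117, 17)] * 7        # Cu layers at nominal thickness (assumed)
        + [(bottom_cu_um, 117, 17)]  # bottommost Cu layer under study
    )

for cu in (15, 20):  # nominal (supplier A) vs. high end of spec (supplier B)
    print(f"bottom Cu {cu} um -> effective CTE ~{effective_cte(stack(cu)):.2f} ppm/degC")
```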

FIGURE 5. Finite element modeling results.

Use of JMP for statistical perspective

As a supplemental tool for data interpretation, JMP statistical analysis was performed to illustrate how nominal and extreme values of the metal and dielectric layer thickness specifications affect FCBGA BLR performance. Analyzing the strain energy density outputs, the model fit the predicted values well, as shown in FIGURE 6. Similarly, CTE correlated well with predicted values, as illustrated in FIGURE 7. The prediction profiler function, illustrated in FIGURE 8, shows that CTE increases with metal and dielectric thickness, which correlates with the stress modeling results.
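
As an illustration of the same idea outside of JMP, the sketch below fits an ordinary least-squares response surface (CTE versus Cu and dielectric thickness) over the nine DoE legs; the response values are made-up placeholders rather than the study’s finite element outputs.

```python
# Ordinary least-squares response surface over the nine DoE legs, analogous in
# spirit to the JMP fit. The CTE responses below are hypothetical placeholders.
import numpy as np
from itertools import product

cu = [10.0, 15.0, 20.0]
di = [24.0, 30.0, 36.0]
X = np.array([[1.0, c, d] for c, d in product(cu, di)])  # intercept + main effects
y = np.array([15.1, 15.3, 15.5, 15.2, 15.4, 15.6, 15.3, 15.5, 15.7])  # assumed CTE (ppm/degC)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept, dCTE/dCu, dCTE/ddielectric:", coef)
print("predicted vs. actual:", list(zip(np.round(X @ coef, 2), y)))
```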

FIGURE 6. JMP model of SED predicted vs. actual.

FIGURE 7. JMP model of CTE predicted vs. actual.

FIGURE 8. CTE prediction as a function of metal and dielectric thickness.

Summary

Subtleties in the thicknesses of the alternating Cu metal and dielectric layers within a build-up substrate can impact BLR performance. Two identical daisy chain substrate designs manufactured by different suppliers were compared head to head. A detailed package construction analysis revealed differences in the bottommost Cu layer thickness within the substrate. This Cu thickness delta resulted in a higher substrate CTE for supplier B than for supplier A because of the thicker copper. Finite element modeling demonstrated that relatively small macroscopic changes in CTE, on the order of less than 2 percent, can affect cycles to failure by ~11 percent.

The key takeaway from the head-to-head evaluation was that supplier A produced the more stable process, as it was able to meet the center point of the Cu thickness specification, whereas supplier B was off target. In essence, however, supplier A lost the head-to-head BLR comparative study with supplier B because its accuracy in meeting the Cu thickness target resulted in reduced solder joint fatigue life. The typical corrective action would be to work with supplier B to establish better tolerance control in its Cu plating process and stabilize Cu thickness at the center, or nominal, value like supplier A. The lesson learned, however, was to tailor and control the Cu thickness at the higher end of the specification to improve reliability performance. Typically, the criterion of success is to hit the bullseye or target, which supplier A achieved. Conversely, supplier B missed this mark with results that were skewed to the right. Ironically, because the results were skewed off target, additional reliability margin was obtained. In light of these findings, the adage “success is in the eyes of the beholder” has never been more poignant.

JAIMAL WILLIAMSON is a packaging engineer responsible for development and qualification of Embedded Processing FCBGA devices within Texas Instruments’ Worldwide Semiconductor Packaging group.

CEA-Leti today announced it has signed an agreement with Keysight Technologies, a device-modeling software supplier, to adapt Leti’s UTSOI extraction flow methodology within Keysight’s device modeling solutions for high-volume SPICE model generation.

The simulation of the Leti-UTSOI compact model, which is the first complete compact model dedicated to Ultra-Thin Body and Box and Independent Double Gate MOSFETs, is currently available in Keysight’s modeling and simulation tools. This agreement expands the collaboration to include the extraction flow and will enable device-modeling engineers to efficiently create Leti-UTSOI model cards for use in Process Design Kits (PDKs).

“This collaboration between Leti and Keysight will strengthen the global FD-SOI ecosystem by providing an automatic extraction flow for building model cards associated with the Leti-UTSOI models, which are already available in all the major SPICE simulators,” said Marie Semeria, Leti’s CEO. “This professional, automatic extraction-flow solution will address designers’ needs as they weigh FD-SOI’s benefits over competing solutions for the 28nm technology node and below.”

“Keysight’s modeling solutions provide both automation and flexibility for device modeling,” said Todd Cutler, general manager of Keysight EEsof EDA. “The addition of a Leti-UTSOI modeling technology will further expand our offering in CMOS modeling. We have been collaborating with Leti on many projects, and we are pleased to extend our relationship to improve access to the Leti-UTSOI.”

The Semiconductor Industry Association (SIA) announced worldwide sales of semiconductors reached $29.0 billion for the month of October 2015, 1.9 percent higher than the previous month’s total of $28.4 billion and 2.5 percent lower than the October 2014 total of $29.7 billion. The Americas market posted 3.9 percent growth compared to last month, leading all regions. All monthly sales numbers are compiled by the World Semiconductor Trade Statistics (WSTS) organization and represent a three-month moving average. Additionally, a new WSTS industry forecast projects slight market growth for the next three years.

“Global semiconductor sales have shown signs of stabilizing in recent months, with October marking the third straight month of month-to-month growth,” said John Neuffer, president and CEO, Semiconductor Industry Association. “Year-to-date sales are narrowly ahead of where they were through the same time last year, and slight growth is projected for next year and beyond.”

Month-to-month sales increased across all regional markets: the Americas (3.9 percent), China (1.6 percent), Europe (1.2 percent), Japan (0.4 percent), and Asia Pacific/All Other (1.7 percent). Compared to October 2014, sales were up in China (5.7 percent), but down in the Americas (-5.6 percent), Europe (-9.4 percent), Japan (-10.5 percent), and Asia Pacific/All Other (-2.4 percent).

Additionally, SIA endorsed the WSTS Autumn 2015 global semiconductor sales forecast, which projects the industry’s worldwide sales will reach $336.4 billion in 2015, a 0.2 percent increase from the 2014 sales total. WSTS projects year-to-year increases for 2015 in Asia Pacific (3.9 percent), with decreases projected for the Americas (-0.6 percent), Europe (-8.2 percent), and Japan (-10.3 percent).

Beyond 2015, the global market is expected to grow at a modest pace. WSTS forecasts 1.4 percent growth globally for 2016 ($341.0 billion in total sales) and 3.1 percent growth for 2017 ($351.6 billion). WSTS tabulates its semi-annual industry forecast by convening an extensive group of global semiconductor companies that provide accurate and timely indicators of semiconductor trends.

October 2015 semiconductor sales (billions of U.S. dollars)

Month-to-Month Sales
Market                    Last Month   Current Month   % Change
Americas                      5.82          6.05          3.9%
Europe                        2.87          2.90          1.2%
Japan                         2.69          2.70          0.4%
China                         8.45          8.58          1.6%
Asia Pacific/All Other        8.58          8.72          1.7%
Total                        28.41         28.96          1.9%

Year-to-Year Sales
Market                    Last Year    Current Month   % Change
Americas                      6.41          6.05         -5.6%
Europe                        3.21          2.90         -9.4%
Japan                         3.01          2.70        -10.5%
China                         8.12          8.58          5.7%
Asia Pacific/All Other        8.94          8.72         -2.4%
Total                        29.68         28.96         -2.5%

Three-Month-Moving Average Sales
Market                    May/Jun/Jul  Aug/Sept/Oct    % Change
Americas                      5.51          6.05          9.7%
Europe                        2.83          2.90          2.5%
Japan                         2.63          2.70          2.3%
China                         8.18          8.58          5.0%
Asia Pacific/All Other        8.71          8.72          0.2%
Total                        27.87         28.96          3.9%

“Advanced packaging will reach 44% of packaging services and a revenue of US$ 30 billion by 2020,” Yole Développement (Yole) announced. Overall, the main advanced packaging market is the mobile sector with end products such as smartphones and tablets. Other high volume applications include servers, PC, game stations, external HDD/USB and more.

According to Yole’s latest advanced packaging report, entitled “Status of the Advanced Packaging Industry” (2015 edition), emerging applications are coming from the IoT world, with wearables and home appliance (connected home) solutions already penetrating the market. Other early-stage IoT investments have also been made in smart cities, connected cars, industrial devices and medical applications.

In parallel, Chinese companies play an important role in the advanced packaging market’s growth: “At Yole, we see an increased activity of Chinese capital in the advanced packaging industry,” explains Andrej Ivankovic, Technology & Market Analyst, Advanced Packaging & Semiconductor Manufacturing at Yole. “The objective of the semiconductor transformation in China is to decrease external dependency and set up a complete internal supply chain that can serve domestic and international customers.”

In this context, how will the advanced packaging industry evolve? What will be the status of the supply chain by 2020? Which packaging technologies will be the most critical tomorrow and beyond? With the emergence of IoT applications, the development of a local Chinese industry, and numerous M&As across the semiconductor industry all having a direct impact on the advanced packaging supply chain, Yole’s advanced packaging analysts offer insight into the new advanced packaging world.

The “Status of the Advanced Packaging Industry” report (2015 edition), released by Yole, the “More than Moore” market research and strategy consulting company, provides a high added-value overview of the industrial landscape; in this new report, Yole’s advanced packaging team proposes a comprehensive analysis of technology trends and also assesses the future development of the advanced packaging market.

This analysis confirms the market positioning of Yole and highlights the knowledge and deep understanding of the company within this industrial field.

According to Yole’s estimates, advanced packaging services revenue will increase by US$9.8 billion from 2014 to 2020 at a CAGR of 7%, mostly due to high-volume adoption of Fan-Out WLP and 2.5D/3D and the evolution and growth of Fan-In WLP and flip-chip. Advanced packages currently account for 38% of all packaging services, or US$20.2 billion, and are expected to grow their share to 44% and US$30 billion by 2020.
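
As a quick arithmetic cross-check of the quoted figures, US$20.2 billion compounding at 7% per year over the six years from 2014 to 2020 lands close to the US$30 billion and roughly US$10 billion increase cited (the small difference comes from rounding of the CAGR).

```python
# Cross-check of the quoted market figures: US$20.2B growing at a 7% CAGR
# over the six years from 2014 to 2020.
base_busd, cagr, years = 20.2, 0.07, 6
projected = base_busd * (1 + cagr) ** years
print(f"projected 2020 revenue: ~${projected:.1f}B "
      f"(increase of ~${projected - base_busd:.1f}B)")
```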

The mobile sector remains the main advanced packaging market with smartphones and tablets as end products. Other high volume applications include servers, PC, game stations, HDD/USB, WiFi hardware, base stations, TVs and set top boxes. The scent of IoT is spreading with first products already on the market in the form of wearables and smart home appliances. Further early stage investments are made in sectors such as smart cities, connected cars, various industrial devices and medical applications.

The flip-chip platform represents a large, mature market and leads in packaging services revenue and wafer count. Fan-In WLP leads in unit count due to its small size relative to the demanded volume. Adoption of wafer level packages continues. Teardowns performed by Yole and its sister company, System Plus Consulting, on three high-end smartphones (the iPhone 6+ and Samsung Galaxy S6 analyses are available in the reports section of i-micronews.com; the Huawei Ascend Mate 7 analysis will be available soon) indicated a high penetration rate of WLP, 30% on average. Fan-Out WLP is expected to make a major breakthrough within the next year, likely led by TSMC InFO PoP and followed by other Fan-Out multi-die solutions. Long term, a bright future lies ahead for wafer level packages with respect to IoT requirements, as they are well positioned to answer the related cost, form factor and functional integration demands. When it comes to advanced feature sizes, a competitive sub-10µm/10µm arena is established where organic wafer level packages aggressively compete with advanced organic flip-chip substrates and 2.5D/3D Si/glass interposers.

As WLP pin counts grow and thicknesses and overall cost decrease, the evolution of Fan-In WLP and, in particular, a breakthrough of Fan-Out WLP are expected to take over part of the flip-chip market. With the breakthrough of Fan-Out WLP, the packaging landscape might change drastically, with an IDM and a foundry leading all packaging services by wafer count.

The full advanced packaging analysis is available today; in the report, Yole’s analysts present revenue, wafer and unit forecasts per advanced packaging platform and production breakdowns by device type, such as analog/mixed signal, wireless/RF, logic and memory, CMOS image sensors, MEMS, LED and LCD display drivers.

This year, the Justus Liebig University Giessen awards its Röntgen Prize to Dr. Eleftherios Goulielmakis. The Röntgen Prize is awarded each year in an academic award ceremony for outstanding work on basic research into radiation physics and radiation biology. It is named in memory of Wilhelm Conrad Röntgen, who was a professor in Giessen from 1879 until 1888. The main goal is to distinguish work by young scientists. Half of the € 15,000 prize is donated by Pfeiffer Vacuum and the Dr. Erich Pfeiffer Foundation, and the other half by the Ludwig Schunk Foundation.

This year’s award winner, Dr. Goulielmakis, is currently the head of the research group at the Max Planck Institute of Quantum Optics in Garching near Munich. He is receiving the award for outstanding contributions in the field of attosecond physics and technology with soft X-rays.

In 2005, Dr. Goulielmakis received his doctorate in physics from the Ludwig Maximilian University of Munich, with studies in attosecond physics. These studies formed the basis for his pioneering contributions in this field. After his doctoral thesis, he succeeded in measuring the shortest electromagnetic pulse so far, of 8 × 10^-17 s. This ultrashort light pulse allows the observation of electron dynamics in atoms and molecules in real time. For the first time, Dr. Goulielmakis and his team managed to fully characterize the motion of valence electrons in ions in real time with an attosecond pulse (1 attosecond = 10^-18 s) in the soft X-ray range. After that, Dr. Goulielmakis and his group developed a “light field synthesizer,” which can manipulate the waveform of a light pulse with attosecond precision. This opens up new methods for controlling electrons with light in the soft X-ray and extreme ultraviolet range with high temporal resolution. Furthermore, Dr. Goulielmakis and his team succeeded in accelerating electrons in solids with ultrafast laser fields, which for the first time allows coherent emission of photons in the extreme ultraviolet spectrum to be achieved.

On the basis of these research results, ultrashort X-ray pulses can be generated using lasers in a special vacuum tube. These pulses make it possible to observe extremely small structures and even, for example, allow electrons to be imaged.

Another application could be light-based circuits, which could increase computational speed by a factor of 100,000 compared to current technology. The work of Dr. Goulielmakis contributes to the fundamental understanding needed to enable such light-based circuits to be developed in the first place.

Manfred Bender, CEO of Pfeiffer Vacuum Technology AG, congratulated the award winner: “Many research facilities have been a partner to Pfeiffer Vacuum for many years now. Our vacuum solutions are successfully used at the Max Planck Institute of Quantum Optics in Garching and we are therefore particularly pleased that Dr. Eleftherios Goulielmakis is this year’s Röntgen Prize winner.” Bender explained further: “For 125 years now, Pfeiffer Vacuum has been setting standards in vacuum technology. The company looks back on a success story shaped by a pioneering spirit and passion, which contributed to the technological progress of industry and science from the very beginning. Therefore, it is very important to us to promote cutting-edge research and, in particular, the next generation.”

Wilfried Glaum, Chairman of Dr. Erich Pfeiffer-Stiftung, the Röntgen Prize winner Dr. Eleftherios Goulielmakis and Manfred Bender, CEO of Pfeiffer Vacuum Technology AG (from left)

Mentor Graphics Corp. announced it is collaborating with GLOBALFOUNDRIES to qualify the Mentor RTL-to-GDS platform, including the RealTime Designer physical RTL synthesis solution and the Olympus-SoC place & route system, for the current version of the GLOBALFOUNDRIES 22FDX platform reference flow. In addition, Mentor and GLOBALFOUNDRIES are working on the development of the Process Design Kit (PDK) for the 22FDX platform. The PDK includes support for the Mentor Calibre platform, covering design rule checking (DRC), layout vs. schematic (LVS) and metal fill solutions for 22FDX. These solutions help mutual customers optimize their designs using the capability of 22FDX technology to manage power, performance and leakage.

“We are collaborating closely with Mentor Graphics on enabling their products to help customers realize the benefits of the 22FDX platform,” said Pankaj Mayor, vice president of Business Development for GLOBALFOUNDRIES. “The qualification of Mentor tools for implementation flows and design verification will help designers to achieve an optimal balance between power, performance and cost.”

In design flows for advanced technologies, RealTime Designer addresses the need for higher capacity, faster runtimes, improved quality of results (QoR) and integrated floorplanning capabilities. For 22FDX in particular, it offers support for multi-VDD designs based on the Unified Power Format (UPF), multi-Vt optimization, leakage and dynamic power analysis and optimization, and a unique RTL-level floorplanning technology for improved QoR and runtimes. The Olympus-SoC tool comprehensively addresses the performance, capacity, time-to-market, power, and variability challenges encountered at advanced technologies. Support for 22FDX includes low power capabilities such as multi-VDD flow, concurrent multi-corner multi-mode timing and power optimization, forward and reverse bias handling, and DCAP cell insertion on power meshes for noise reduction.

“Our customers are designing some of the most complex chips for mobile, wireless, networking and graphics products,” said Pravin Madhani, general manager of the IC Implementation Group at Mentor Graphics. “Our collaboration with GLOBALFOUNDRIES will enable us to deliver advanced digital implementation flows for the 22FDX platform for our mutual customers.”

The Calibre nmDRC, Calibre nmLVS, and Calibre YieldEnhancer tools provide the verification functionality available in the 22FDX PDK. Core DRC and LVS verification are provided by the Calibre nmDRC and Calibre nmLVS tools, respectively. Calibre YieldEnhancer with SmartFill helps designers meet planarity and density requirements, and minimize post-fill timing changes, by intelligently and automatically filling designs with the optimum distribution and placement of fill shapes.

The next release of the 22FDX PDK will place GLOBALFOUNDRIES differentiated DFM capabilities into the hands of designers. The industry-leading DRC+, Manufacturing Analysis and Scoring (MAS), and Yield Enhancement Services (YES) design kit offerings from GLOBALFOUNDRIES, all based upon the Calibre platform, assist design teams in analyzing the manufacturability impact of their design styles with the 22FDX process technology. The DRC+ methodology uses fast pattern-matching capabilities in the Calibre Pattern Matching tool to identify lithographically problematic patterns, then uses Calibre nmDRC to enforce tighter design constraints where those patterns occur. The MAS and YES methodologies help reduce manufacturing variability: MAS employs the DFM Scoring functionality in Calibre YieldAnalyzer to score IP blocks and SoCs across all layers; in the YES service, GLOBALFOUNDRIES engineers use the layout modification capabilities in Calibre YieldEnhancer to modify edges and via placements to improve the robustness of the layout.

“By incorporating the most advanced Calibre analysis and verification capabilities into its 22FDX platform, GLOBALFOUNDRIES is giving designers the tools they need to build robustness into their products,” said Joseph Sawicki, vice president, Design to Silicon Division at Mentor Graphics. “This not only ensures high-quality designs are delivered for fabrication, but also will help ensure faster ramps to production.”

Mentor Graphics and GLOBALFOUNDRIES are working on supporting advanced extraction and reliability verification sign-off capabilities for Calibre xACT and Calibre PERC solutions.

Intel and ASM look to TCB


November 17, 2015

BY PHIL GARROU, Contributing Editor

In the September column, we looked at some of the key thermo-compression bonding (TCB) papers at ECTC. Is there any question that TCB is real and will be the next big bonding technology? The focus this month is more on this very important new assembly process from Intel and ASM.

Intel introduced TCB into high volume manufacturing in 2014. As substrates and die become thinner and solder bump sizes and pitches get smaller, the thin organic substrate tends to warp at room temperature and as the temperature is increased during the reflow process. The thin die can also exhibit temperature-dependent warpage, which can come into play during the reflow process. The extent of warpage of the substrate and die at high temperatures can overcome the natural solder surface tension force, leading to die misalignment with respect to the substrate, resulting in tilt, non-contact opens (NCO) and, in some cases, solder ball bridging (SBB). FIGURE 1 shows these various defects.

In the Intel TCB process, the substrate with pre-applied flux is held flat on the hot pedestal under vacuum. The die is picked up by the bond head and held securely and flat on the bond head with vacuum. After the die is aligned with the substrate, the bond head comes down and stops when the die touches the substrate. A constant force is then applied while the die is heated quickly beyond the solidus temperature. As soon as the solder melts, the die is moved further down (solder chase) to ensure all solder joints are in contact. The die is held in position, allowing the solder to reflow completely and to wet the bump pads and copper pillars. While the solder is still in the molten state, the bond head retracts upwards, controlling the solder joint height. Once the solder joints have solidified, the bond head releases the vacuum holding the die and moves away. The major process parameters, i.e., temperature, force and displacement, are continuously monitored during the TCB bonding process.
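
Purely as an illustration, the outline below restates that bond sequence in code; the step descriptions are paraphrased from the column and the logging call is a hypothetical placeholder for the continuous temperature, force and displacement monitoring.

```python
# Illustrative outline of the Intel TCB bond sequence described above. Step
# descriptions are paraphrased; log_parameters() is a hypothetical placeholder.
BOND_SEQUENCE = [
    "hold fluxed substrate flat on the hot pedestal under vacuum",
    "pick up die and hold it flat on the bond head under vacuum",
    "align die to substrate",
    "lower bond head until the die touches the substrate",
    "apply constant force while heating the die past the solidus temperature",
    "solder chase: move die down as solder melts so all joints make contact",
    "hold position while solder reflows and wets bump pads and copper pillars",
    "retract bond head while solder is molten to set the joint height",
    "release vacuum and move away once the joints have solidified",
]

def log_parameters():
    """Placeholder for continuous temperature/force/displacement monitoring."""
    pass

for i, step in enumerate(BOND_SEQUENCE, 1):
    log_parameters()
    print(f"step {i}: {step}")
```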

Large differences in CTE between the organic substrate and die result in expansions of different magnitude when heated, which can lead to serious bump offset at the corners. To minimize the thermal expansion mismatch, the substrate is processed at a lower temperature (e.g., 140°C) while the die and solder are rapidly heated for reflow and cooled for solidification using a pulse heater with a heating ramp rate exceeding 100°C/s and a cooling ramp rate exceeding 50°C/s. This reduces the heat transfer to the substrate; the bulk of the substrate can remain at low temperature and does not expand extensively.
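
A rough timing estimate based on the quoted ramp rates shows how brief the die’s thermal excursion is; the 250°C peak used below is an assumed, illustrative reflow temperature rather than a value from the column.

```python
# Rough per-die thermal excursion using the ramp rates quoted above.
# The 250 degC peak is an assumed illustrative reflow temperature.
substrate_temp_C = 140.0
peak_temp_C = 250.0          # assumption
heat_rate_C_per_s = 100.0    # ">100 degC/s" heating ramp
cool_rate_C_per_s = 50.0     # ">50 degC/s" cooling ramp

heat_time_s = (peak_temp_C - substrate_temp_C) / heat_rate_C_per_s
cool_time_s = (peak_temp_C - substrate_temp_C) / cool_rate_C_per_s
print(f"heat ~{heat_time_s:.1f} s, cool ~{cool_time_s:.1f} s per die")
```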

In another ASM paper on TCB, the authors examined what they call liquid phase contact (LPC) TCB. The goal is to increase the throughput of the TCB process. The process flow is as follows: flux is printed or sprayed on the substrate; the bonding head picks up a die from the carrier at an elevated temperature, but below the solder melting point; the bonding head is then heated to a temperature higher than the solder melting point and the chip is aligned with the substrate; the chip is then contacted and wetted on the substrate at a predetermined bonding height; and after a predetermined bonding time, the bonding head is cooled down to a temperature below the melting point of the solder. They claim this results in attachment of 1,200 units/hr vs. 600 for the standard TCB flux process.

Vacuum technology trends can be seen over the period of innovation defined by Moore’s Law, particularly in the areas of increasing shaft speed, management of pumping power, and the use of computer modeling.

BY MIKE CZERNIAK, Edwards UK, Crawley, England

The sub-fab lies beneath. And down there, in that thicket of pipes, amidst the hum of vacuum pumps, the sentinel gas combustors and the pulse of muscular machinery doing real work, innovation has also played a crucial role in enabling Moore’s Law. Without it, the glamor boys up top with their bunny suits and FOUPs would not have achieved the marvelous feats of engineering derring-do for which they are so deservedly celebrated.

Vacuum and abatement are two of the most critical functions of the sub-fab. Many process tools require vacuum in the process chamber to permit the process to function. Vacuum pumps not only provide the required vacuum, they also remove unused process gases and by-products. Abatement systems then treat those gases so they are safe to release or dispose of. Vacuum and abatement systems in the sub-fab have had to innovate just as dramatically as the exposure, deposition and etch tools of the fab. In many cases, new processes would not have been possible without new vacuum pumps that could handle new materials and new abatement systems that could make those materials safe for release or disposal.

Moore’s Law

Moore’s Law originated in a paper published in 1965 and titled “Cramming More Components onto Integrated Circuits,” written by Gordon Moore, then director of research and engineering at Fairchild Semiconductor [1]. In it Moore observed that the economics of the integrated circuit manufacturing process defined a minimum cost at a certain number of components per circuit and that this number had been doubling every two years as the manufacturing technology evolved. He believed that the trend would continue for at least the short term, and perhaps as long as ten years. His observation became a mantra for the industry, soon to be known as Moore’s Law (FIGURE 1).

FIGURE 1.

More an astute observation than a law, Moore’s Law is remarkable in several respects. First, the rate of improvement it predicts, doubling every two years, is unheard of in any other major industry. In “Moore’s Curse” (IEEE Spectrum, March 2015), Vaclav Smil calculated historical rates of improvement for a variety of essential industries over the last couple of centuries and found typical rates of a few percent, an order of magnitude less than Moore’s rate [2]. Second is its longevity. Moore thought it was good for the short term, perhaps as long as ten years. Its persistence is perhaps due, at least partly, to the unique role Moore’s Law has assumed within the semiconductor industry, where it has become both a guide to and a driver of the pace of innovation. The Law has become a guiding principle – you shall introduce a new generation with double the performance every two years. It is a rule to live by, enshrined in the industry’s roadmap, and violated only at great peril. Only painfully did Intel recently admit that the doubling period for its latest generation appeared to have stretched to something more like two and a half years [3]. To an extent the Law is a self-fulfilling prophecy, which some have argued works to the detriment of the industry when it forces the release of new processes before they are fully optimized. Whatever you might think of it, the Law’s persistence is remarkable. The literature is full of dire predictions of its demise, all of which, at least so far, have proven premature.
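
To put that comparison in numbers, a doubling every two years corresponds to an annualized improvement of roughly 41 percent, versus the few percent per year Smil found in other industries.

```python
# Annualized growth rate implied by doubling every two years.
annual_rate = 2 ** (1 / 2) - 1
print(f"doubling every 2 years ~= {annual_rate:.1%} per year")  # ~41.4%
```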

Finally, we must ask: what is meant by the names assigned to each new node? What exactly does 14nm, the current state of the art, mean? Although Moore originally described the number of components per integrated circuit, the Law was soon interpreted to apply to the density of transistors in a circuit. This was variously construed. Some measured it as the size of the smallest feature that could be created, which determined the length of the transistor gate. Others pointed to the spacing between the lines of the first layer of metal conductors connecting the transistors, the metal-1 half-pitch. These may have been fairly accurate measures twenty years ago at the 0.35μm node, but node names have since steadily lost their connection to physical features of the device. It would be difficult to point to any physical dimension at the 14nm node that is actually 14nm. For instance, the FinFET transistor in a 22nm chip is 35nm long and the fin is 8nm wide.

What remains true is that in each successive generation the transistors are smaller and more densely packed and performance is significantly increased. Each generation is named with a smaller number that is approximately 70% of the previous generation’s, reflecting the fact that a 70% shrink in linear dimension equates to a 50% reduction in area and therefore a nominal doubling in transistor density.
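
The arithmetic behind that naming convention takes only a few lines: a 0.7× linear scale factor gives a 0.49× area factor, and repeated scaling from the 0.35μm node mentioned above lands close to the familiar node names, down to 14nm.

```python
# 0.7x linear shrink per generation: area scales by 0.7**2 ~= 0.49, so
# transistor density roughly doubles each node.
scale = 0.7
print(f"area factor per node: {scale**2:.2f} -> density x{1 / scale**2:.2f}")

node_nm = 350.0  # the 0.35 um node
for _ in range(10):
    print(f"~{node_nm:.0f} nm")  # prints ~350, 245, 171, 120, 84, 59, 41, 29, 20, 14
    node_nm *= scale
```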

Enabling Moore’s Law in the sub-fab: A brief chronology

In the 1980s, new semiconductor processes and increasing gas flows associated with larger diameter wafers led to problems with aggressive chemicals and solids collecting in the oil used in oil-lubricated “wet” pumps, resulting in short service intervals and high cost of ownership. These were resolved by the development and introduction of oil-free “dry pumps” which have subsequently become the semiconductor industry standard.

Dry rotary pumps require extremely tight running clearances and multiple stages to achieve a practical level of vacuum. The additional cost of these machines, however, was more than offset by the benefits offered to semiconductor manufacturing. Dry pumps use a variety of pumping mechanisms — roots, claw, screw and scroll (FIGURE 2).

FIGURE 2.

Many of these were new concepts, but modern machining capabilities made it possible to produce them at a realistic cost, the most notable being Edwards’ introduction of the first oil-free dry pump in the 1980s. Each pumping mechanism has been successfully deployed, and each has its own advantages and disadvantages in a given application. The scroll pump, for example, is unique in its ability to economically scale down to much smaller sizes.

In the early 1990s it became apparent that with the introduction of dry pumps, the pump oil no longer acted as a “wet scrubber” to collect process by-product gases, which therefore passed into the exhaust system. The solution was the development of the Gas Reactor Column (GRC) to chemically capture process exhaust gases in a disposable/recyclable cartridge, minimizing exhaust emissions to the atmosphere.

At about the same time, new, more aggressive process gases being used in leading-edge semiconductor processes posed significant challenges for turbomolecular pumps (TMPs) due to the damage they caused to the mechanical bearings used to support their high-speed rotating shafts (typically ~40,000 rpm). Turbo pumps use rapidly spinning blades to impart direction to gas molecules, propelling them through multiple stages of increasing pressure. Early turbo pumps used oil- or grease-lubricated bearings. Similar to the problems encountered with oil-sealed rotary pumps, the new process chemicals tended to degrade the oil, frequently causing pump failures in as little as a few weeks. This problem was solved by introducing magnetic bearings to levitate the pump drive shaft and eliminate the need for lubricating oil.

In the mid-1990s the semiconductor industry started to use perfluorinated compounds (PFCs) as a convenient source of chamber cleaning and etch gases. However, since only ~30% of the input gas was consumed in the process chamber, there were considerable PFC emissions to the atmosphere. Of particular concern was CF4, due to its atmospheric lifetime of 50,000 years. The solution was the Thermal Processor Unit, which offered the first system with proven destruction or removal efficiency (DRE) of 90% or more for CF4.
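
For reference, destruction or removal efficiency is simply the fraction of the inlet gas that does not leave the abatement system; a minimal sketch with illustrative flow values:

```python
# Destruction or removal efficiency (DRE) = 1 - (outlet flow / inlet flow).
# The flow values are illustrative only.
def dre(inlet_sccm, outlet_sccm):
    return 1.0 - outlet_sccm / inlet_sccm

print(f"DRE = {dre(100.0, 8.0):.0%}")  # 92%, above the 90% CF4 target cited above
```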

In the 2000s, the increasing use of toxic gases led to growing concerns about the abatement of these materials before they were released to the environment and about the safety of personnel within the fab. Integrated vacuum and abatement systems, where everything is contained in a sealed and extracted enclosure, offer a significant improvement in safety. Integrated systems have since been refined with improvements such as a common control system, reduced footprint and installation costs, and shorter pipelines to reduce operating and maintenance costs.

Abatement systems have continued to evolve. New processes using new materials often require a different approach to abatement. For example, new technologies were developed for high-hydrogen processes, copper interconnects and low-k dielectrics.

Trends and prospects

Certain vacuum technology trends can be seen over this history of innovation, particularly in the areas of increasing shaft speed, management of pumping power, and the use of computer modeling to monitor performance and predict when maintenance will be required so that it can be synchronized with other activities in the fab.

Shaft speed

When dry pumps were first introduced, they typically operated at around 3,000 to 3,600 rpm. Today’s dry pumps use electric drives to run considerably faster, typically 6,000 rpm for claw, screw, and multi-stage roots pumps (FIGURE 3).

FIGURE 3.

Increasing a pump’s rotational speed delivers a number of advantages. It makes it possible to build more compact pumps and motors, with less internal leakage, which in turn, enables a reduction in the number of pump stages required. It also allows the speed to be reduced when wafers are not being processed, thereby saving energy. Combined, these benefits help reduce the overall pump cost.

Each type of pumping mechanism has a different size and shape of volume to fill. A scroll mechanism, with a narrow, ported inlet and a long, thin volume, is one of the slowest pumping mechanisms to fill, so its performance does not increase in proportion to increasing shaft speed; most scroll pumps operate at just 1,500 rpm. A roots mechanism, by contrast, has a very large opening and a short volume length, enabling it to fill quickly and make efficient use of higher shaft speeds.
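
As a first-order view of why shaft speed matters, ignoring the conductance and filling effects just described, pumping speed scales with swept volume per revolution times rotational speed; the displacement value below is an illustrative assumption.

```python
# First-order pumping speed: swept volume per revolution x shaft speed,
# ignoring conductance and back-leakage. The 0.5 L/rev displacement is assumed.
def pumping_speed_m3_per_h(swept_volume_litres_per_rev, rpm):
    return swept_volume_litres_per_rev * rpm * 60 / 1000  # L/min -> m3/h

print(pumping_speed_m3_per_h(0.5, 1500))  # ~45 m3/h at a scroll-like shaft speed
print(pumping_speed_m3_per_h(0.5, 6000))  # ~180 m3/h at a modern dry-pump speed
```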

The conductance ceiling for roots and screw pumps is probably ~15,000 rpm. Achieving this speed will require incremental improvements in materials, bearings, and drives. It is likely that we will reach the conductance ceiling for most of the current primary pumping mechanisms within the next decade, although some, such as roots and screw mechanisms, may prove more durable than others.

Turbomolecular pump conductance is governed by blade speed and molecular velocities. Turbo performance has been limited primarily by the maximum speed the bearings and rotor can withstand. The industry is looking for new materials that are lighter and stronger to enable increased speed. While this pump type may be reaching its conductance limit on heavier gases, it is far from reaching it for lighter gases, such as hydrogen. This may take a much longer time to achieve.

Power management

Significant advances have been made in improving the energy efficiency of both vacuum pumps and abatement systems. Improvements in pump design have increased energy efficiency. Variable speed motors and controllers allow better matching of the motor speed to varying pump requirements. Idle mode allows both pumps and abatement systems to go into a low power mode when not in use. Improvements in burner design have reduced the fuel consumption of combustion-based abatement. With the increase in concern about environmental impact and carbon footprint, continued improvement in this area can be expected.

Modeling

Computer modeling has been applied extensively to all stages of pump performance. Such variables as stage size, running clearance, leakage, and conductance can all be modeled quite effectively. This allows design simulation and the optimization of performance, such as the shape of the power and speed curve. In this way, a pump can be designed for specific duties, such as load lock pumping or processing high hydrogen flows (FIGURE 4).

FIGURE 4.

Vacuum pumps of the future will be more reliable and capable of operating for longer periods of time before requiring maintenance. They will be safer to operate, will occupy less fab space, run cleaner and require less power, as well as generate less noise, vibration, and heat. They will also have improved corrosion resistance and the ability to run hotter when required.

As a result, vacuum pumps will be more environmentally friendly, running cleaner and using less power to help reduce their carbon footprint. In addition, they will likely make much greater use of recycled materials and use fewer consumables, thereby helping to reduce overall pump costs. The pumps will be easier to clean, repair, and rebuild for reuse.

Likely technical developments will also include higher shaft speeds and a growing proliferation of pump mechanisms and combinations of mechanisms to increase performance. Finally, vacuum pumps will incorporate new materials and improved modeling to further sharpen performance and reduce system and operating costs.

References

1. G. Moore, “Cramming More Components onto Integrated Circuits,” Electronics, April 19, 1965.
2. V. Smil, “Moore’s Curse,” IEEE Spectrum, March 19, 2015.
3. R. Courtland, “The Status of Moore’s Law: It’s Complicated,” IEEE Spectrum, October 28, 2013.

MIKE CZERNIAK is the Environmental Solutions Business Development Manager, Edwards UK, Crawley, England.

Systematic – and predictive – cost reduction in semiconductor equipment manufacturing

BY TOM MARIANO, Foliage, Burlington, MA

After a period of double-digit growth, the semiconductor equipment industry has stabilized to the point where recent market forecasts predict anemic single-digit growth rates, driven by total market demand from chipmakers. For example, despite strong growth of 12.9 percent in 2014, Gartner, Inc. projects worldwide semiconductor capital spending to grow only 0.8 percent in 2015, to $65.7 billion [1]. Additionally, this industry has always been subject to volatile demand cycles that are notoriously difficult to predict.

Translation: It’s extremely challenging for today’s semiconductor equipment manufacturers to improve their financial performance. There are fewer and fewer opportunities to grow topline revenue through innovation and new product development. And, after several years of cutting costs on existing products and not realizing enough cost reduction to improve margins, it’s difficult to know how to do it differently.

Yet a viable alternative to improve financial performance does exist: A disciplined, rigorous, and systematic approach to reducing costs that delivers more predictive results.

A systematic approach to cost reduction

Where cutting costs was once perceived as the end result of “desperate times, desperate measures,” many innovators are now using this approach much more proactively. By meeting the idea of cost reduction head on – as an opportunity, not a last resort – many semiconductor equipment makers are uncovering wasteful, inefficient, and costly processes, often in areas they once overlooked. At this point, you may be thinking, “All of this sounds great, but what is a systematic approach to cost reduction, and how is it different from what I’m doing?”

Remember that many manufacturers (in all industries) tend to have a hard time driving costs down. They may set cost reduction goals and then attempt to achieve them using various ad hoc approaches. But they really need to understand exactly what their true costs are, where they exist, and which areas will improve their margins.

A systematic approach to cost reduction gives them this insight. With improved visibility into the entire organization, its various processes, and how they execute, semiconductor equipment manufacturers can identify the right places to cut costs and hit their cost savings goals. This is a very detailed and planned approach in which organizations closely examine areas such as cost of goods sold, R&D, and service to make more informed decisions that will position their business for long-term success. This is the value of a systematic approach to cost reduction.

This approach also introduces the element of speed, helping equipment makers realize cost savings much faster than ad hoc cost-cutting initiatives and putting them on a path to achieve more predictive results. Beyond the positive (and more obvious) impact successful cost reduction has on a semiconductor equipment manufacturer’s bottom line, it also provides a number of significant benefits such as improving productivity, freeing up key personnel, and providing needed capital to fuel new growth.

The path to predictive results

Even if the concept of a more strategic approach to cutting costs sounds reasonable, many semiconductor equipment manufacturers struggle with how to begin and where to focus. All too often they resort to making reactive decisions regarding existing products without the necessary data, leading them to ask questions such as, “Should we have an obsolescence plan for this product?” “How much could we save?” and “Will this lead to bigger problems down the road?”

Without understanding where your best opportunities for cost cutting are, it’s a lot harder to predict when, and if, cost reduction goals will be met. A systematic approach to cost reduction includes establishing clear cost targets, communicating them to leadership, and measuring and reporting results along the way.

The first step is to engage with an outside firm that has a singular focus on cost reduction, and one that is clearly separated from day-to-day operations and current organizational dynamics. Such an engagement will yield an actionable list of improvements with specific cost targets, realistic timelines for achieving these goals, and future plans for reinvesting the cost savings.

More specifically, a systematic cost reduction approach will focus on three key areas: material costs, R&D costs, and service costs:

1. Material costs: The bill of materials is one of the most common ways to see all the components needed to produce the end product. But this goes well beyond the pure cost of materials. Research has shown that improving the way these components are managed can affect 80-90% of the product’s total costs [2].

For semiconductor equipment manufacturers, the cost reduction process should start with the selection of the products or sub-assemblies that have the highest potential for savings. Focus on those products that are still generating significant revenue but may not be receiving much attention in terms of engineering upgrades and enhancements. Thoroughly examine the bill of materials for these products by addressing materials, design, complexity reduction, the potential to create common assemblies, and more.

Value engineering efforts can simultaneously improve product functionality and performance while reducing bill of material costs. This effort should factor in ways to meet RoHS requirements and when to make end-of-life decisions for various electrical components to improve design efficiency and the effectiveness of the product.

A realistic cost reduction goal can then be created and a resulting value-engineering project can commence, often using low-cost offshore resources to best achieve those savings.
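
As a minimal illustration of where such an examination might begin, the sketch below ranks hypothetical BOM line items by extended cost to surface value-engineering candidates; the part names, costs, and quantities are made up.

```python
# Minimal BOM cost Pareto: rank line items by extended cost to pick candidates
# for value engineering. Part names, costs and quantities are hypothetical.
bom = [
    # (part, unit_cost_usd, quantity)
    ("vacuum chuck assembly", 1200.0, 1),
    ("stepper motor", 85.0, 6),
    ("machined bracket", 40.0, 12),
    ("wiring harness", 150.0, 3),
    ("fasteners kit", 0.5, 400),
]

items = sorted(((part, cost * qty) for part, cost, qty in bom),
               key=lambda x: x[1], reverse=True)
total = sum(extended for _, extended in items)
running = 0.0
for part, extended in items:
    running += extended
    print(f"{part:24s} ${extended:8.2f}   cumulative {running / total:5.1%}")
```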

2. R&D costs: Making better decisions related to R&D processes and product development can shave considerable costs. Some areas to focus on include:

• When to officially end of life non-performing products
• When to consolidate products, or possibly even entire R&D departments
• When and how to move sustaining engineering efforts offshore, or to other lower-cost alternatives

The critical next step is to look at all products and all product variations to determine if an official end-of-life program should be employed. These decisions are notoriously hard to make and often require difficult conversations with key customers, but they are necessary nonetheless.

Many semiconductor equipment manufacturers have grown through acquisitions, creating redundant engineering groups that can be eliminated or downsized. Performing an organizational analysis of all R&D activities may uncover opportunities to consolidate and combine functions or create centers of excellence that focus on specific technical areas, eliminating redundancies of technical specialty.

3. Service costs: Examine engineering and design processes to find ways to improve performance, reliability, and costs. For example, add data collection technology or product diagnostics to enhance remote support efforts and predictive maintenance.

Improvement of product reliability is usually a large multiplier when it comes to service and spare parts costs. Collect and analyze field data to find the most significant issues driving service costs and then look to cut where possible.

For example, equipment in the field often does not have the capability to report enough information to effectively identify a problem. Adding increased data logging and communication can clarify machine status and point service in the right direction. Connectivity can also help with remote diagnostics, all of which helps reduce costs and improve uptime and customer satisfaction.
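
A minimal sketch of that kind of field-data analysis is shown below, estimating fleet MTBF and expected failures per tool-year from assumed counts; the figures are hypothetical.

```python
# Estimate field MTBF and failures per tool-year from logged data, a first
# step in finding the issues driving service cost. All counts are hypothetical.
fleet_size = 40
operating_hours_per_tool = 6000.0  # over the reporting period
failures_logged = 24

total_hours = fleet_size * operating_hours_per_tool
mtbf_hours = total_hours / failures_logged
print(f"MTBF ~ {mtbf_hours:,.0f} h; ~{8760.0 / mtbf_hours:.2f} failures per tool-year")
```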

Cost Reduction as a Competitive Advantage

Short-term market forecasts will continue to make it challenging for semiconductor equipment manufacturers to deliver improved financial results. Yet the concept of a systematic approach to cost reduction is a proven way for them to proactively cut costs – in the right places – and also make better decisions related to existing products and other business systems and processes.

By taking a disciplined, rigorous, and objective look at any and all parts of their organization, semiconductor equipment makers can capitalize on new opportunities to free valuable resources, improve processes and future technology, and reinvest savings for future growth. For many equipment manufacturers the greatest obstacle to successfully exploiting these opportunities is insufficient experience and expertise with a disciplined and unconventional way of approaching cost reduction projects. A systematic approach to cost reduction will be the key to success for companies looking to improve their competitive advantage.

References

1. Gartner, Inc., “Gartner Says Worldwide Semiconductor Capital Spending to Increase 0.8 Percent in 2015: Conservative Investment Strategies Paving the Way to Slower Growth in 2015,” January 13, 2015. http://www.gartner.com/newsroom/id/2961017.

2. Forbes, “Product Lifecycle Management: A New Path to Shareholder Value?” August 5, 2011. http://www.forbes.com/sites/ciocentral/2011/08/05/product-lifecycle-management-a-new-path-to-shareholder-value/.

A recent report from Navigant Research analyzes the global market opportunity for residential Internet of Things (IoT) devices, including forecasts for shipments, installed base, and revenue, segmented by region and device type, through 2025.

According to the report, global revenue from shipments of these residential IoT devices is expected to total more than $330 billion from 2015 to 2025. The report also concludes that yearly revenue will grow from $7.3 billion in 2015 to $67.7 billion in 2025. Through devices such as smart thermostats that allow users to remotely control household temperatures or LED lights that can be switched on and off from a smartphone, the much-hyped IoT concept has arrived in the residential setting. Major companies are beginning to recognize the opportunity that these communicating devices offer for increased efficiency, automation, security, and comfort in the home.

“The IoT is like putting together a jigsaw puzzle without any edge pieces, with the number of pieces growing exponentially into the billions,” says Neil Strother, principal research analyst with Navigant Research. “Communicating devices in the IoT traverse a wide range of industries and sectors—virtually all areas of life can expect to see some form of this connected world.”

Despite the many drivers for the residential IoT market, there are at present multiple protocols and standards that are creating an interoperability barrier, according to the report. Wi-Fi, ZigBee, Bluetooth, and others are all vying for market viability, which is creating confusion for consumers and stalling overall adoption.

The report, IoT (Internet of Things) for Residential Customers, defines the emerging residential IoT market and examines the global market opportunity related to IoT technologies. The study provides an analysis of the key market drivers and barriers associated with residential IoT devices, including smart meters, smart thermostats, lighting, smart appliances, security and management systems, and smart plugs. Global market forecasts for shipments, installed base, and revenue, segmented by region and device type, extend through 2025. The report also examines the key technologies related to residential IoT devices, as well as the competitive landscape. An Executive Summary of the report is available for free download on the Navigant Research website.