Category Archives: MEMS

STATS ChipPAC Ltd., a provider of advanced semiconductor packaging and test services, today announced that Cavendish Kinetics, a provider of high performance RF MEMS tuning solutions for LTE smartphones and wearable devices, has adopted its advanced wafer level packaging technology to deliver Cavendish’s SmarTune RF MEMS tuners in the smallest possible form factor, as a 2mm² chip scale package.

LTE smartphone original equipment manufacturers (OEMs) are rapidly adopting antenna tuning solutions to be able to provide the required signal strength across the large number of LTE spectrum bands used globally. Cavendish’s SmarTune RF MEMS tuners outperform traditional RF silicon-on-insulator (SOI) switch-based antenna tuning solutions by 2-3dB, resulting in much higher data rates (up to 2x) and improved battery life (up to 40 percent). Cavendish RF MEMS tuner shipments are ramping aggressively and can now be found in six different smartphone models across China, Europe and North America, with many additional designs in development.

“Our RF MEMS tuners present demanding packaging requirements, including the need to deliver the smallest possible form factor in a process that protects the integrity of our hermetically sealed MEMS structure,” said Atul Shingal, Executive Vice President of Operations, Cavendish Kinetics. “STATS ChipPAC’s wafer level packaging platform provided advantages in package size, performance and scalability, and a proven, cost effective manufacturing process that supports our accelerating volume production.”

STATS ChipPAC provides a comprehensive platform of wafer level technology from Fan-in Wafer Level Packaging (FIWLP) to highly integrated Fan-out Wafer Level Packaging (FOWLP) solutions known as embedded Wafer Level Ball Grid Array (eWLB). Cavendish Kinetics and STATS ChipPAC are jointly working to utilize the inherent benefits of wafer level packaging technology to drive further RF antenna tuning innovations for the smartphone market.

“Through our successful partnership, Cavendish Kinetics has been able to implement their current generation industry leading MEMS-based antenna tuning solution. In future products, we will be able to provide Cavendish Kinetics with options for greater functional integration and silicon partitioning capabilities that are only feasible with our industry leading fan-out eWLB technology,” said Dr. Rajendra Pendse, Vice President and Chief Marketing Officer, STATS ChipPAC.

Imagination Technologies announces that South Korea-based MEMS sensor development company Standing Egg has licensed Imagination’s MIPS Warrior M-class CPU for use in its next-generation sensor hubs targeting an expanding range of products, including mobile devices, IoT, wearables, and automotive.

Standing Egg develops MEMS sensor products including accelerometers, gyroscopes, pressure sensors, and others. With its planned MIPS-based sensor hub chips, modules and boards, Standing Egg will provide a means to integrate and process data from these different sensors.

In its selection of an MCU-class CPU for its next-generation products, Standing Egg compared MIPS CPUs to other competing CPU IP cores. Performance, power and area were key decision criteria, and according to Standing Egg, the MIPS M5100 surpassed other CPUs on these metrics. Standing Egg also determined that the MIPS M5100 CPU can process sensor signals faster and with lower power – an important design consideration for the company. The security features in the MIPS CPU, including anti-tamper technology, also played a key role in the decision.

Jongsung Lee, CEO, Standing Egg, says: “Standing Egg’s sensors are designed and built by us so we control every element from concept to packaging. When it came to selecting a CPU, we chose MIPS for its outstanding performance efficiency and features. Its ability to handle sophisticated algorithms as well as its signal processing capability make the MIPS M5100 ideal for our next-generation sensor hubs, and we already have interest in these products from several customers. Imagination’s full line of IP, including connectivity IP and cloud services, is quite appealing for sensor hub applications.”

Says Jim Nicholas, vice president, MIPS business operations, Imagination Technologies: “Standing Egg is innovating in MEMS sensor development across design, manufacturing, testing and packaging, and we are delighted that they have chosen MIPS for their next design. This is one example of the increasing traction that we are seeing for MIPS across Asia and around the globe. IoT and wearables are particularly hot areas for MIPS Warrior CPUs, as companies look for leading-edge CPUs with features like hardware virtualization, enhanced security and hardware multi-threading that will give them an edge in designing their next-generation devices.”

Standing Egg plans to release sensor hub products based on the MIPS M5100 CPU in the second half of 2015, with an FPGA version available in advance.

Standing Egg is a professional MEMS (Micro Electro Mechanical Systems) sensor development company located in Korea.

BY JOE CESTARI, Total Facility Solutions, Plano, Texas

When the commercial semiconductor manufacturing industry decides to move to the next wafer size of 450mm, it will be time to re-consider equipment and facilities strategies. Arguably, there is reason to implement new strategies for any new fab to be built regardless of the substrate size. In the case of 450mm, if we merely scale up today’s 300mm layouts and operating modes, the costs of construction would more than double. Our models show that up to 25 percent of the cost of new fab construction could be saved through modular design and point-of-use (POU) facilities, and an additional 5-10 percent could be saved by designing for “lean” manufacturing.

In addition to cost-savings, these approaches will likely be needed to meet the requirements for much greater flexibility in fab process capabilities. New materials will be processed to form new devices, and changes in needed process-flows and OEM tools will have to be accommodated by facilities. In fact, tighter physical and data integration between OEM tools and the fab may result in substantially reduced time to first silicon, ongoing operating costs and overall site footprint.

POU utilities with controls close to the process chambers, rather than in the sub-fab, have been modeled as providing a 25-30 percent savings on instrumentation and control systems throughout the fab. Also, with OEM process chamber specifications for vacuum-control and fluid-purity levels expected to increase, POU utilities provide a flexible way to meet future requirements.

Reduction of fluid purity specifications on central supply systems in harmony with increases in localized purification systems for OEM tools can also help control costs, improve flexibility, and enhance operating reliability. There are two main reasons why our future fabs will need much greater flexibility and intelligence in facilities: high-mix production, and 1-12 wafer lots.

High-mix production

Though microprocessors and memory chips will continue to increase in value and manufacturing volumes, major portions of future demand for ICs will be SoCs for mobile applications. The recently announced “ITRS 2.0”—the next roadmap for the semiconductor fab industry after the “2013” edition published early in 2014—will be based more on application solutions and less on simple shrinks of technology. Quoting Gartner Dataquest’s assessment:

“System-on-chip (SoC) is the most important trend to hit the semiconductor industry since the invention of microprocessors. SoC is the key technology driving smaller, faster, cheaper electronic systems, and is highly valued by users of semiconductors as they strive to add value to their products.”

1-12 Wafer Lots

The 24-wafer lot may remain the most cost-effective batch size for low-mix fabs, but for high-mix lines 12-wafer lots are now anticipated even for 300mm wafers. For 450mm wafers, the industry needs to re-consider “the wafer is the batch” as a manufacturing strategy. The 2013 ITRS chapter on Factory mentions in Table 5 that by the year 2019 “Single Wafer Lot Manufacturing System as an option” will likely be needed by some fabs. Perhaps a 1-5 wafer carrier and interface would be a way for an Automated Material Handling System (AMHS) to link discrete OEM tools as an evolution of current 300mm FOUP designs.

However, a true single-wafer fab line would be the realization of a revolution started over twenty years ago by the MMST Program, a $100M+, five-year R&D effort funded by DARPA, the U.S. Air Force, and Texas Instruments that developed a 0.35μm double-level-metal CMOS fab technology with a three-day cycle time. In the last decade, BlueShift Technologies was started and stopped in an attempt to provide such revolutionary technology: vacuum-robot lines connecting single-wafer chambers, all with a common physical interface.

Lean manufacturing approaches should work well with high-mix product fabs, in addition to providing more efficient consumption of consumables in general. Specifically, when lean manufacturing is combined with small batch sizes, down to the single wafer, there is tremendous improvement in cycle time.

MIT researchers have developed a new, ultrasensitive magnetic-field detector that is 1,000 times more energy-efficient than its predecessors. It could lead to miniaturized, battery-powered devices for medical and materials imaging, contraband detection, and even geological exploration.

Magnetic-field detectors, or magnetometers, are already used for all those applications. But existing technologies have drawbacks: Some rely on gas-filled chambers; others work only in narrow frequency bands, limiting their utility.

Synthetic diamonds with nitrogen vacancies (NVs) — defects that are extremely sensitive to magnetic fields — have long held promise as the basis for efficient, portable magnetometers. A diamond chip about one-twentieth the size of a thumbnail could contain trillions of nitrogen vacancies, each capable of performing its own magnetic-field measurement.

The problem has been aggregating all those measurements. Probing a nitrogen vacancy requires zapping it with laser light, which it absorbs and re-emits. The intensity of the emitted light carries information about the vacancy’s magnetic state.

“In the past, only a small fraction of the pump light was used to excite a small fraction of the NVs,” says Dirk Englund, the Jamieson Career Development Assistant Professor in Electrical Engineering and Computer Science and one of the designers of the new device. “We make use of almost all the pump light to measure almost all of the NVs.”

The MIT researchers report their new device in the latest issue of Nature Physics. First author on the paper is Hannah Clevenson, a graduate student in electrical engineering who is advised by senior authors Englund and Danielle Braje, a physicist at MIT Lincoln Laboratory. They’re joined by Englund’s students Matthew Trusheim and Carson Teale (who’s also at Lincoln Lab) and by Tim Schröder, a postdoc in MIT’s Research Laboratory of Electronics.

Telling absence

A pure diamond is a lattice of carbon atoms, which don’t interact with magnetic fields. A nitrogen vacancy is a missing atom in the lattice, adjacent to a nitrogen atom. Electrons in the vacancy do interact with magnetic fields, which is why they’re useful for sensing.

When a light particle — a photon — strikes an electron in a nitrogen vacancy, it kicks it into a higher energy state. When the electron falls back down into its original energy state, it may release its excess energy as another photon. A magnetic field, however, can flip the electron’s magnetic orientation, or spin, increasing the difference between its two energy states. The stronger the field, the more spins it will flip, changing the brightness of the light emitted by the vacancies.

Making accurate measurements with this type of chip requires collecting as many of those photons as possible. In previous experiments, Clevenson says, researchers often excited the nitrogen vacancies by directing laser light at the surface of the chip.

“Only a small fraction of the light is absorbed,” she says. “Most of it just goes straight through the diamond. We gain an enormous advantage by adding this prism facet to the corner of the diamond and coupling the laser into the side. All of the light that we put into the diamond can be absorbed and is useful.”

Covering the bases

The researchers calculated the angle at which the laser beam should enter the crystal so that it will remain confined, bouncing off the sides — like a tireless cue ball ricocheting around a pool table — in a pattern that spans the length and breadth of the crystal before all of its energy is absorbed.

“You can get close to a meter in path length,” Englund says. “It’s as if you had a meter-long diamond sensor wrapped into a few millimeters.” As a consequence, the chip uses the pump laser’s energy 1,000 times as efficiently as its predecessors did.
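As a back-of-the-envelope check of this geometry (our own sketch; the refractive index and chip dimensions below are assumptions, not values from the paper), total internal reflection sets both the entry angle and the bounce count:

```python
import math

# Assumptions (not from the paper): diamond refractive index n ~ 2.42,
# chip edge ~3 mm, total internal path ~1 m as quoted above.
n_diamond = 2.42

# Critical angle for total internal reflection at a diamond-air surface:
# light hitting the surface at a steeper-than-critical incidence stays
# confined inside the crystal (Snell's law).
theta_c = math.degrees(math.asin(1.0 / n_diamond))
print(f"critical angle ~ {theta_c:.1f} degrees")   # about 24.4 degrees

# A ~1 m folded path inside a ~3 mm chip implies on the order of a few
# hundred internal reflections.
path_m, edge_m = 1.0, 3e-3
print(f"~{path_m / edge_m:.0f} bounces")
```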

Because of the geometry of the nitrogen vacancies, the re-emitted photons emerge at four distinct angles. A lens at one end of the crystal can collect 20 percent of them and focus them onto a light detector, which is enough to yield a reliable measurement.

The global semiconductor materials market increased 3 percent in 2014 compared to 2013 while worldwide semiconductor revenues increased 10 percent. Revenues of $44.3 billion mark the first increase in the semiconductor materials market since 2011.

Total wafer fabrication materials and packaging materials were $24.0 billion and $20.4 billion, respectively. Comparable revenues for these segments in 2013 were $22.7 billion for wafer fabrication materials and $20.4 billion for packaging materials. The wafer fabrication materials segment increased 6 percent year-over-year, while the packaging materials segment remained flat. However, if bonding wire were excluded from the packaging materials segment, the segment increased more than 4 percent last year. The continuing transition to copper-based bonding wire from gold is negatively impacting overall packaging materials revenues.

For the fifth consecutive year, Taiwan, with its large foundry and advanced packaging base, was the largest consumer of semiconductor materials, with revenues totaling $9.6 billion; Japan remained the second-largest market. Annual revenue growth was strongest in the Taiwan market. The materials market in North America had the second-largest increase at 5 percent, followed by China, South Korea and Europe. The materials markets in Japan and Rest of World were flat relative to 2013 levels. (The ROW region is defined as Singapore, Malaysia, the Philippines, other areas of Southeast Asia and smaller global markets.)

Region (US$ billions) 2013 2014 % Change
Taiwan 8.91 9.58 8%
Japan 7.17 7.19 0%
South Korea 6.87 7.03 2%
Rest of World 6.64 6.66 0%
China 5.66 5.83 3%
North America 4.76 4.98 5%
Europe 3.04 3.08 1%
Total 43.05 44.35 3%

Source: SEMI, April 2015
Note: Figures may not add due to rounding.

The Material Market Data Subscription (MMDS) from SEMI provides current revenue data along with seven years of historical data and a two-year forecast.

The Semiconductor Industry Association (SIA), representing U.S. leadership in semiconductor manufacturing and design, today announced worldwide sales of semiconductors reached $27.8 billion for the month of February 2015, an increase of 6.7 percent from February 2014 when sales were $26.0 billion. Global sales from February 2015 were 2.7 percent lower than the January 2015 total of $28.5 billion, reflecting seasonal trends. Regionally, sales in the Americas increased by 17.1 percent compared to last February to lead all regional markets. All monthly sales numbers are compiled by the World Semiconductor Trade Statistics (WSTS) organization and represent a three-month moving average.

“The global semiconductor industry maintained momentum in February, posting its 22nd straight month of year-to-year growth despite macroeconomic headwinds,” said John Neuffer, president and CEO, Semiconductor Industry Association. “Sales of DRAM and Analog products were particularly strong, notching double-digit growth over last February, and the Americas market achieved its largest year-to-year sales increase in 12 months.”

Regionally, year-to-year sales increased in the Americas (17.1 percent) and Asia Pacific (7.6 percent), but decreased in Europe (-2.0 percent) and Japan (-8.8 percent). Sales decreased compared to the previous month in Europe (-1.6 percent), Asia Pacific (-2.2 percent), Japan (-2.3 percent), and the Americas (-4.4 percent).

“While we are encouraged by the semiconductor market’s sustained growth over the last two years, a key driver of our industry’s continued success is free trade,” Neuffer continued. “A legislative initiative called Trade Promotion Authority (TPA) has paved the way for opening markets to American goods and services for decades, helping to give life to nearly every U.S. free trade agreement in existence, but it expired in 2007. With several important free trade agreements currently under negotiation, Congress should swiftly re-enact TPA.”

February 2015 (US$ billions)
Month-to-Month Sales
Market Last Month Current Month % Change
Americas 6.51 6.23 -4.4%
Europe 2.95 2.90 -1.6%
Japan 2.62 2.56 -2.3%
Asia Pacific 16.47 16.10 -2.2%
Total 28.55 27.79 -2.7%
Year-to-Year Sales
Market Last Year Current Month % Change
Americas 5.32 6.23 17.1%
Europe 2.96 2.90 -2.0%
Japan 2.81 2.56 -8.8%
Asia Pacific 14.96 16.10 7.6%
Total 26.04 27.79 6.7%
Three-Month-Moving Average Sales
Market Sep/Oct/Nov Dec/Jan/Feb % Change
Americas 6.53 6.23 -4.6%
Europe 3.19 2.90 -9.2%
Japan 2.93 2.56 -12.7%
Asia Pacific 17.12 16.10 -6.0%
Total 29.77 27.79 -6.7%

Consider these eight issues where the packaging team should be closely involved with the circuit design team.

BY JOHN T. MACKAY, Semi-Pac, Inc., Sunnyvale, CA

Today’s integrated circuit designs are driven by size, performance, cost, reliability, and time-to-market. To optimize these design drivers, the requirements of the entire system should be considered at the beginning of the design cycle, from the end system product down to the chips and their packages. Failure to include packaging in this holistic view can result in missing market windows or getting to market with a product that is more costly and problematic to build than an optimized product.

Chip design

As a starting consideration, chip packaging strategies should be developed before chip design is complete. System timing budgets, power management, and thermal behavior can be defined at the beginning of the design cycle, eliminating the sometimes impossible constraints handed to the package engineering team at the end of the design. In many instances, chip designs end up unnecessarily difficult to manufacture, with higher-than-necessary assembly costs and reduced manufacturing yields, because the chip design team used minimum design rules where looser rules would have sufficed.

Examples include using minimum pad-to-pad spacing when the pads could have been spread out, or using an unnecessarily tight minimum metal-to-pad clearance (FIGURE 1). These hard-learned lessons are well understood by the large chip manufacturers, yet they often resurface with newer companies and design teams that have not experienced them. Using design-rule minimums puts unnecessary pressure on the manufacturing process, resulting in lower overall manufacturing yields.


FIGURE 1. In this image, the bonding pads are grouped in tight clusters rather than evenly distributed across the edge of the chip. This makes it harder to bond to the pads and requires more-precise equipment to do the bonding, thus unnecessarily increasing the assembly cost and potentially impacting device reliability.

Packaging

Semiconductor packaging has often been seen as a necessary evil, with most chip designers relying on existing packages rather than package customization for optimal performance. Wafer-level and chip-scale packaging methods have further perpetuated the belief that the package is less important and can be eliminated, saving cost and improving performance. In fact, the semiconductor package provides six essential functions: power in, heat out, signal I/O, environmental protection, fan-out/compatibility with surface mounting (SMD), and managing reliability. These functions do not disappear with the implementation of chip-scale packaging; they simply transfer to the printed circuit board (PCB) designer. Passing the buck does not solve the problem, since PCB designers and their tools are not usually expected to give optimal consideration to the essential requirements of the semiconductor die.

Packages

Packaging technology has evolved considerably over the past 40 years. The evolution has kept pace with Moore’s Law, increasing density while reducing cost and size. Hermetic pin grid arrays (PGAs) and side-brazed packages have mostly been replaced by lead-frame-based plastic quad flat packs (QFPs). Following those developments, laminate-based ball grid arrays (BGAs), quad flat pack no-leads (QFN), chip scale and flip-chip direct attach became the dominant package choices.

The next generation of packages will employ through-silicon vias to allow 3D packaging with chip-on-chip or chip-on-interposer stacking. Such approaches promise to solve many packaging problems and usher in a new era. The reality is that each package type has its benefits and drawbacks, and no package type ever seems to become completely extinct. The designer needs an in-depth understanding of all the packaging options to determine how each die design might benefit or suffer from any particular package type. If the designer does not have this expertise, it is wise to call in a packaging team that does.

Miniaturization

The push to put more and more electronics into a smaller space can inadvertently lead to unnecessary packaging complications. The ever-increasing push to produce thinner packages is a compromise against reliability and manufacturability. Putting unpackaged die on the board definitely saves space and can produce thinner assemblies, such as smart card applications. But this chip-on-board (COB) approach often runs into problems: as PCB designers attempt to reconcile board manufacturing line-and-space realities with wire bond requirements, the die become difficult to bond because of tight proximity to other components, or end up with unnecessarily long bond wires or wires at acute angles that can cause shorts.

Additionally, the use of minimum PCB design rules can complicate the assembly process, since PCB etch-process variations must be accommodated. Picking the right PCB manufacturer is important too, as laminate substrate manufacturers and standard PCB shops are often wrongly seen as equals. Designers will often select materials and metal systems that were designed for surface mounting but turn out to be difficult to wire bond. Picking a supplier that makes the right metallization tradeoffs and maintains process discipline is important in order to maximize manufacturing yields.

Power

Power distribution, including decoupling capacitance and copper ground and power planes, has mostly been a job for the PCB designer. Many users wonder why decoupling is rarely embedded into the package as a complete unit; cost or package size limitations are typically the reasons cited. The reality is that semiconductor component suppliers usually don’t know the system requirements, power fluctuation tolerance and switching noise mitigation of any particular installation, so power management is left to the system designer at the board level.

Thermal Management

Miniaturization results in less volume and heat spreading to dissipate heat. Often, there is no room or project funds available for heat sinks. Managing junction temperature has always been the job of the packaging engineer who must balance operating and ambient temperatures and packaging heat flow.

Once again, it is important to develop a thermal strategy early in the design cycle that includes die specifics, die attachment material specification, heat spreading die attachment pad, thermal balls on BGA and direct thermal pad attachment during surface mount.

Signal input/output

Managing signal integrity has always been the primary concern of the packaging engineer. Minimizing parasitics, crosstalk, impedance mismatch, transmission line effects and signal attenuation are all challenges that must be addressed. The package must handle the input/output signal requirements at the desired operating frequencies without a significant decrease in signal integrity. All packages have signal characteristics specific to the materials and package designs.

Performance

A number of factors impact performance, including on-chip drivers, impedance matching, crosstalk, power supply shielding, noise and PCB materials, to name a few. Performance goals must be defined at the beginning of the design cycle, with tradeoffs made throughout the design process.

Environmental protection

The designer must also be aware that packaging choices have an impact on protecting the die from environmental contamination and/or damage. Next-generation chip-scale packaging (CSP) and flip chip technologies can expose the die to contamination. While the fab, packaging and manufacturing engineers are responsible for coming up with solutions that protect the die, the design engineer needs to understand the impact that these packaging technologies have on manufacturing yields and long-term reliability.

Involve your packaging team

Hopefully, these points have provided some insight into how packaging impacts many aspects of design and why it should not be relegated to simply picking the right package at the end of the chip design. It is important that your packaging team be involved in the design process from initial specification through the final design review.

In today’s fast-moving markets, market windows are shrinking, so time to market is often the key differentiator between success and failure. Not involving your packaging team early in the design cycle can result in costly rework at the end of the project, manufacturing issues that delay the product introduction or, even worse, impossible problems that could have been eliminated had packaging been considered at the beginning of the design cycle.

System design incorporates many different design disciplines. Most designers are proficient in their domain specialty and not all domains. An important byproduct of these cross-functional teams is the spreading of design knowledge throughout the teams, resulting in more robust and cost effective designs.

Programme information is now available for the inaugural SEMICON Southeast Asia, which will run from 22–24 April at SPICE in Penang. The event, organized by SEMI, a global industry association, features an expanded programme and larger audience base focusing on Southeast Asia communities in the semiconductor and microelectronics sector. SEMI estimates spending of US$19 billion on semiconductor equipment and materials in the Southeast Asia region for 2015 and 2016. With an emphasis on opening up new business opportunities and fostering stronger cross-regional engagement, SEMICON Southeast Asia will feature a tradeshow exhibition, networking events, market and technology seminars, and conferences.

Ng Kai Fai, President of SEMI Southeast Asia, said, “Southeast Asia is a vibrant and changing market for the semiconductor industry. For 2015 and 2016, SEMI estimates spending of almost US$ 5 billion on front-end and back-end equipment in the Southeast Asia region, and another $14 billion in spending on materials including $11 billion on packaging-related materials.  Southeast Asia has over 35 production fabs including Foundry, Compound Semiconductors, MEMS, Power, LED, and other devices. The region contributes a substantial 27 percent of global assembly, test and production, on top of being the largest market for assembly and test equipment,” he added.

More than 60 industry speakers and 200 companies will participate in SEMICON Southeast Asia, with thousands of attendees participating in the event. Attendees will learn the latest technology developments and strategies from industry leaders. SEMICON Southeast Asia connects decision makers from leading and emerging semiconductor companies with important industry stakeholders from both the region and all over the world.

Focusing on key trends and technologies in semiconductor design and manufacturing, the event also addresses expanding applications markets like mobile devices and other connected “Internet of Things” (IoT) technologies. Key enablers, such as specialised materials, packaging, and test technologies, as well as new architectures and processes, will be featured throughout the event. Highlights of SEMICON Southeast Asia include:

  • Market Trend Briefing — Features presentations from: EQUVO, Gartner, GFK Retail Technology, IC Insights, SMC Pneumatics (SEA), SEMI, and Yole Developpement
  • Assembly and Packaging Forum — “Emerging Packaging Solutions for Computing, Mobility and IoT Platforms” forum features presentations from: Advantest, AMD, ASE Group, Freescale Semiconductor, GLOBALFOUNDRIES Singapore, Intel, Infineon, Kulicke & Soffa, Lam Research, MediaTek, Tanaka Kikinzoku, and Yole Developpement
  • Product and System Test Forum — “Testing Strategy for a Fast-paced Semiconductor Market” with presentations from Advantest, ATMEL, GLOBALFOUNDRIES Singapore, Intel, Keysight Technologies, Silicon Labs International, UTAC Singapore, Xcerra

In addition, the event features forums on Technology Innovation, LED Technology, and Yield Productivity and Failure Analysis.

For more information and exhibition opportunities, visit www.semiconsea.org or register now.

Machine learning based advanced analytics for anomaly detection offers powerful techniques that can be used to achieve breakthroughs in yield and field defect rates.

BY ANIL GANDHI, PH.D. and JOY GANDHI, Qualicent Analytics, Inc., Santa Clara, CA

In the last few decades, the volume of data collected in semiconductor manufacturing has grown steadily. Today, with the rapid rise in the number of sensors in the fab, the industry is facing a huge torrent of data that presents major challenges for analysis. Data by itself isn’t useful; for it to be useful it must be converted into actionable information to drive improvements in factory performance and product quality. At the same time, product and process complexities have grown exponentially requiring new ways to analyze huge datasets with thousands of variables to discover patterns that are otherwise undetected by conventional means.

In other industries such as retail, finance, telecom and healthcare where big data analytics is becoming routine, there is widespread evidence of huge dollar savings from application of these techniques. These advanced analytics techniques have evolved through computer science to provide more powerful computing that complements conventional statistics. These techniques are revolutionizing the way we solve process and product problems in the semiconductor supply chain and throughout the product lifecycle. In this paper, we provide an overview of the application of these advanced analytics techniques towards solving yield issues and preventing field failures in semiconductors and electronics.

Advanced data analytics boosts prior methods of achieving breakthrough yields, zero defect and optimized product and process performance. The techniques can be used as early as product development and all the way through high-volume manufacturing, and they provide a cost-effective observational supplement to expensive DOEs. They include machine learning algorithms that can handle hundreds to thousands of variables in big or small datasets. This capability is indispensable at advanced nodes, where complex fab process technologies and product functionalities otherwise make defects intractable.

Modeling target parameters

Machine learning based models provide a predictive model of targets, such as yield or field defect rate, as functions of process, PCM, sort or final test variables used as predictors. In the development phase, the challenge is to eliminate major systematic defect mechanisms and optimize new processes or products to ensure high yields during the production ramp. Machine learning algorithms reduce the number of variables from hundreds or thousands down to the few key variables of importance; this reduction is just sufficient to allow nonlinear models to be built without overfitting. Using the model, a set of rules involving these key variables is derived. These rules provide the best operating conditions to achieve the target yield or defect rate. FIGURE 1 shows an example non-linear predictive model.
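As an illustration of this kind of rule derivation (our sketch, not the authors' tooling; the variable names, thresholds and data are invented), a shallow decision tree fit to a handful of key variables yields human-readable rules directly:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)

# Synthetic stand-in for a reduced dataset: three hypothetical key process
# variables and a yield target depressed by a 3-way interaction.
X = rng.normal(size=(2000, 3))
M, Q, T = X[:, 0], X[:, 1], X[:, 2]
y = 90 - 8 * ((M > 0.5) & (Q < -0.3) & (T > 0.2)) + rng.normal(0, 1, 2000)

# A shallow tree keeps the model interpretable: each root-to-leaf path is
# a rule of the form "M > a AND Q <= b AND T > c -> predicted yield".
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=50).fit(X, y)
print(export_text(tree, feature_names=["M", "Q", "T"]))
```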

FIGURE 1. Predictive model example.

FIGURE 2 is another example of rules extracted from a model, showing that when all conditions of the rule hold across the three predictors simultaneously, yield is lower. Discovering this signal with standard regression techniques failed because of the influence of a large number of manufacturing variables: each variable individually has a small, negligible influence, but together they create noise that masks the signal. Standard regression techniques available in commercial software are therefore unable to detect the signal in these instances and are not of practical use for process control. So how do we discover rules such as the ones shown in Fig. 2?

FIGURE 2. Individual parameters M, Q and T do not exert influence while collectively they create conditions that destroy yield. Machine learning methods help discover these conditions.

Rules discovery

Conventionally, a parametric hypothesis is made based on prior knowledge (process domain knowledge) and then tested. For example, to improve an etest metric such as threshold voltage, one could start with a hypothesis connecting this backend parameter with RF power on an etch process in the frontend. Many times, however, it is impossible to form a hypothesis from domain knowledge because of the complexity of the processes and the variety of possible interactions, especially across several steps. So, alternatively, a generalized model with cross terms is proposed, the significant coefficients are kept, and the rest are discarded. This works when the number of variables is small but fails with a large number of variables. With 1100 variables (a very conservative number for fabs) there are 221 million possible 3-way interactions and more than 600,000 2-way cross terms on top of the linear coefficients!
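The combinatorics behind those counts are easy to verify (a quick check using only Python's standard library):

```python
from math import comb

p = 1100                 # candidate process variables
print(comb(p, 3))        # 221,228,700 possible 3-way interactions
print(comb(p, 2))        # 604,450 2-way cross terms
```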

Fitting these coefficients would require a number of samples, or records, that is clearly not available in the fab. Recognizing that most of the variables and interactions have no bearing on yield, we must first reduce the feature set size (i.e., the number of predictors) to a healthy, manageable limit (< 15) before applying any model; several machine learning techniques based on derivatives of decision trees are available for feature reduction, as in the sketch below. Once the feature set is reduced, exact models are developed using a palette of techniques such as advanced variants of piecewise regression.
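A minimal sketch of such a feature reduction, assuming scikit-learn and using random-forest importances as a stand-in for the unspecified decision-tree derivatives mentioned above (the data and variable indices are invented; the 15-variable cutoff mirrors the text):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 1100))      # ~1100 fab variables, few records
y = 2 * X[:, 10] - X[:, 42] * X[:, 7] + rng.normal(0, 0.5, 500)

# Tree-ensemble importances rank variables by how much they reduce error
# across many splits; keep only the top handful (< 15) as predictors.
forest = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
top = np.argsort(forest.feature_importances_)[::-1][:15]
print("reduced feature set:", sorted(top.tolist()))
```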

In essence, what we have described above is discovery of the hypothesis, whereas traditionally one starts with a hypothesis to be tested. The example in Fig. 2 had 1100 variables, most of which had no influence; six had measurable influence (three of them are shown), and all of them were hard to detect because of dimensional noise.

The above type of technique is part of a group of methods classified as supervised learning. In this type of machine learning, one defines the predictors and target variables and the technique finds the complex relationships or rules governing how the predictors influence the target. In the next example we include the use of unsupervised learning which allows us to discover clusters that reveal patterns and relationships between predictors which can then be connected to the target variables.

FIGURE 3. Solar manufacturing line conveyor, sampled at four points for colorimetry.

FIGURE 3 shows a solar manufacturing line with four panels moving on a conveyor. The end measure of interest that needed improvement was cell efficiency. Measurements are made at the anneal step for each panel at locations 1, 2, 3 and 4, as shown in FIGURE 4. The ratio between measurement sites with respect to a key metric called colorimetry was discovered to be important; it was found by employing clustering algorithms, which are part of unsupervised learning. A subsequent supervised model found this ratio to influence PV solar efficiency as part of a 3-way interaction.

FIGURE 4. The ratios between 1, 2, 3, 4 colorimetry were found to have clusters and the clusters corresponded to date separation.

In this case, without the use of unsupervised machine learning methods, it would have been impossible to identify the ratio between two predictors as an important variable affecting the target: the relationship was not known, so no hypothesis could have been made to test it among the large number of metrics and associated statistics that were gathered. Further investigation identified DATE as the determining variable for the clusters.
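A minimal sketch of that discovery step, assuming scikit-learn's KMeans on synthetic data (the site readings, the drift mechanism and the ratio construction are invented for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Hypothetical colorimetry readings at the four measurement sites; the
# interesting variable turns out to be the ratio between sites, not the
# raw values, a relationship no one would have hypothesized up front.
c = rng.normal(100, 5, size=(1000, 4))
c[500:, 0] *= 1.08                       # drift after some date
ratios = c[:, [0, 1, 2]] / c[:, [1, 2, 3]]

# Unsupervised clustering exposes the grouping; checking cluster labels
# against metadata (here, a date split) identifies the driver.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(ratios)
print("cluster sizes:", np.bincount(labels))
```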

Ultimately the goal was to create a model for cell efficiency. The feature reduction described earlier is performed, followed by advanced piecewise regression. The resulting model, validated by building on 80 percent of the data and testing against the remaining 20 percent, repeated ten times with a different random sample each time, is a complex non-linear model whose key element is a 3-way interaction, as shown in FIGURE 5. The dark green area represents the condition that drops median efficiency by 30 percent from best-case levels. This condition, Colorimetry < 81, Date > X and N2 < 23.5, defines the exclusion zone that should be avoided to improve cell efficiency.
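The validation scheme just described (ten random 80/20 splits) can be sketched with scikit-learn's ShuffleSplit; the data and the stand-in tree model are invented, and a piecewise-regression model would slot in where the tree is used here:

```python
import numpy as np
from sklearn.model_selection import ShuffleSplit, cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(4)
X = rng.normal(size=(800, 6))            # six reduced predictors
y = X[:, 0] - 0.5 * X[:, 1] * X[:, 2] + rng.normal(0, 0.3, 800)

# Ten random 80/20 train/test splits, each with a different random sample.
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
scores = cross_val_score(DecisionTreeRegressor(max_depth=4), X, y, cv=cv)
print("R^2 per split:", np.round(scores, 2))
```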

FIGURE 5. N2 (x-axis) < 23.5, colorimetry < 81 and Date > X represent the “bad” condition (dark green) where the median cell efficiency drops by 30% from best case levels.

Advanced anomaly detection for zero defect

Throughout the production phase, process control and maverick part elimination are key to preventing failures in the field at early life and the rest of the device operating life. This is particularly crucial for automotive, medical device and aerospace applications where field failures can result in loss of life or injury and associated liability costs.

The challenge in screening potential field failures is that these are typically marginal parts that pass individual parameter specifications. With increased complexity and hundreds to thousands of variables, monitoring a handful of parameters individually is clearly insufficient. We present a novel machine learning-based approach that uses a composite parameter that includes the key variables of importance.

Conventional single parameter maverick part elimination relies on robust statistics for single parameter distributions. Each parameter control chart detects and eliminates the outliers but may eliminate good parts as well. Single parameter control charts are found to have high false alarm rates resulting in significant scrap rates of good material.

In this novel machine learning based method, the composite parameter uses a distance measure from the centroid in multidimensional space. Just as in single-parameter SPC charts, data points that fall farthest from the distribution and cross the control limits are flagged as mavericks and eliminated. In that sense the implementation of this method is very similar to conventional SPC charts, while the algorithm’s complexity is hidden from the user; a generic sketch of such a chart follows.
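Qualicent's distance analysis is proprietary, but a generic composite-distance chart of this kind can be sketched with a Mahalanobis distance from the centroid (our sketch; the chi-square control limit assumes roughly multivariate-normal data, and alpha mirrors a classic 3-sigma chart):

```python
import numpy as np
from scipy.stats import chi2

def composite_distance_chart(X, alpha=0.0027):
    """Squared Mahalanobis distance of each unit from the centroid, plus a
    control limit from the chi-square approximation."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    d = X - mu
    d2 = np.einsum("ij,jk,ik->i", d, cov_inv, d)   # quadratic form per unit
    limit = chi2.ppf(1 - alpha, df=X.shape[1])
    return d2, limit

# Example: screen maverick parts across 12 test parameters at once.
rng = np.random.default_rng(3)
X = rng.normal(size=(5000, 12))
d2, limit = composite_distance_chart(X)
print(f"{(d2 > limit).sum()} of {len(X)} units flagged (limit={limit:.1f})")
```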

FIGURE 6. Comparison of single parameter control chart for the top parameter in the model and Composite Distance Control Chart. The composite distance method detected almost all field failures without sacrificing good parts whereas the top parameter alone is grossly insufficient.

See FIGURE 6 for a comparison of the single-parameter control chart of the top variable of importance versus the composite distance chart. TABLES 1 and 2 show the confusion matrices for these charts. With the single-parameter approach, the topmost contributing parameter detects 1 out of 7 field failures; we call this accuracy. However, only one out of 21 declared fails is actually a fail; we call this the purity of the fail class. More failures could potentially be detected by lowering the limit in the top chart, but the purity of the fail class, already poor, would then rapidly balloon to unacceptable levels.

TABLE 1. Top Parameter

TABLE 2. Composite Parameter

In the composite distance method, on the other hand, 6 out of 7 fails are detected (good accuracy). The cost of this detection is also low (high purity), because 6 of 10 declared fails are actual field failures. That is far better than 1 out of 21 in the incumbent case, and significantly better than the single top-parameter chart even with a slightly lowered limit.

We emphasize two key advantages of this novel anomaly detection technique. First, its multivariate nature enables detection of marginal parts that not only pass the specification limits for individual parameters but are also within distribution for every parameter taken individually. The composite distance successfully identifies marginal parts that fail in the field. Second, the method significantly reduces the false alarm risk compared to single-parameter techniques, reducing the cost associated with the “producer’s risk,” or beta risk, of rejecting good units. In short: better detection of maverick material at lower cost.

Summary and conclusion

Machine learning based advanced analytics for anomaly detection offers powerful techniques that can be used to achieve breakthroughs in yield and field defect rates. These techniques can crunch large data sets with hundreds to thousands of variables, overcoming a major limitation of conventional techniques. The two key methods explored in this paper are as follows:

Discovery – This set of techniques provides a predictive model containing the key variables of importance that affect target metrics such as yield or field defect levels. Rules discovery, a supervised learning technique among the many methods we employ, discovers rules that provide the best operating or process conditions for achieving the targets; alternatively, it identifies exclusion zones that should be avoided to prevent loss of yield and performance. Discovery techniques can be used during the early production phase, when the need to eliminate major yield or defect mechanisms to protect the high-volume ramp is greatest, and they are equally applicable in high-volume production.

Anomaly Detection – This method, based on the unsupervised learning class of techniques, is an effective tool for maverick part elimination. Composite distance process control, based on Qualicent’s proprietary distance analysis method, provides a cost-effective way of preventing field failures. At leading semiconductor and electronics manufacturers, the method has predicted actual automotive field failures that occurred at top carmakers.

Supplier Hub answers the needs of a changing semiconductor industry. 

BY LUC VAN DEN HOVE, imec, Leuven, Belgium

Our semiconductor industry is a cyclical business, with regular ups and downs. But we have always rebounded successfully, with new technologies that have brought on the next generation of electronic products. Now, however, the industry stands at an inflection point: some of the challenges of introducing next-generation technologies are larger than ever before. Overcoming this point will require, in our opinion, tighter collaboration than ever. To accommodate that collaboration, we have set up a new Supplier Hub, a neutral platform where researchers, IC producers, and suppliers work on solutions to technical challenges. This collaboration will allow the industry to move past the inflection point and on to the next cycle of success, driven by the many exciting application domains appearing on the horizon.

Call for a new collaboration model

The formulas for the industry’s success have changed. Device structures are pushing the limits of physics, making it challenging to continue progressing according to Moore’s Law. Intricate manufacturing requirements make process control ever more difficult. Chip design, too, is more complex than ever before, requiring more scrutiny, analysis and testing before manufacturing can even begin. And the cost of manufacturing equipment and setting up a fab has risen exponentially, shutting out many smaller companies and forcing equipment and material suppliers to merge.

In that context, more and more innovation is coming from the supplier community, both equipment and material suppliers. But as processes approach fundamental material, chemical and physical limits, it is becoming more difficult for suppliers to operate and to develop next-generation process steps in isolation. Earlier and stronger interaction among suppliers is needed.

All this makes a central and neutral platform more important than ever. That insight, and the requests we received from partners, set imec on the path to organizing a supplier hub: a hub structured as a neutral, open-innovation R&D platform, for which we make a substantial part of our 300mm cleanroom floor space available, even extending our facilities. It is a platform where suppliers and manufacturers collaborate side by side with the researchers developing next-generation technology nodes.

Organizing the supplier hub is a logical evolution in the way we have always set up collaborations with and between companies involved in semiconductor manufacturing, collaborations that proved very successful over the previous decade and resulted in a number of key innovations.

Supplier Hub off to a promising start

Today, both in logic and in memory, we are developing solutions to enable the 7nm and 5nm technology nodes. These will involve new materials, new transistor architectures, and ever-shrinking dimensions of structures and layers. At imec, the bulk of scaling efforts like these used to be done in collaborative programs involving IDMs and foundries, but also fabless and fab-lite companies. All of these programs were strongly supported by our partnerships with the supplier community.

But today, to work out the various innovations in process steps needed for future nodes, we need a stronger and more strategic engagement from the supplier community, involving experimentation on the latest tools, even while they are still under development. And vice versa: tool and material suppliers can no longer develop tools from spec documents alone. To fabricate their products successfully and on time, they need to develop and test in a real process flow and be involved in the development of new device concepts, so that they can build tools and design process steps that match the requirements of the new devices.

A case in point: it is no longer possible to develop and assess the latest generation of advanced lithography without matching materials and etch processes. Conversely, the other tool suppliers need the results of the latest litho developments. Today, all process steps have to be optimized concurrently with the other process steps, integrating material innovations at the same time. This is absolutely necessary for success.

So that’s where the Supplier Hub enters.

In 2013, imec announced an extended collaboration with ASML involving the setup of an advanced patterning center, which will grow to 100 engineers. In 2014, the new center started up as the cornerstone of the supplier hub. In mid-2014, Lam Research agreed to take part in the hub, and since then a growing number of suppliers have joined, among them the big names in the industry. Among the more recent collaborations we announced were Hitachi (CD-SEM metrology equipment) and SCREEN Semiconductor Solutions (cleaning and surface preparation tools).

At the end of 2014, ASML started installing its latest EUV tool, the NXE:3300. In the meantime, we have started building a new cleanroom next to our existing 300mm infrastructure; the extra floor space will be needed to accommodate the additional equipment that will come in as part of the tighter collaboration among suppliers. Finally, during our October 2014 Internal Partner Conference, we organized a first Supplier Collaboration Forum, where suppliers discussed and evaluated their projects with all partners, representing a large share of the semiconductor community.

We have also been expanding the supplier hub concept through a deeper involvement of material suppliers. These will prove a cornerstone of the hub, as many advances we need for scaling to the next nodes will be based on material innovations.

Enabling the Internet-Of-Everything

I hold great optimism for the industry. In recent years, the success of mobile devices has fueled demand for semiconductor-based products. These mobile applications will continue to stimulate data consumption, moving from 4G to 5G as consumers clamor for greater data availability, immediacy, and access. Beyond the traditional computing and communications applications loom new markets, collectively called the ‘Internet of Everything.’

In addition, nanoelectronics will enable disruptive innovations in healthcare to monitor, measure, analyze, predict and prevent illnesses. Wearable devices have already proven themselves in encouraging healthier lifestyles. The industry’s challenge is now to ensure that the data delivered via personal devices meet medical quality standards. In that frame, our R&D efforts will continue to focus on ultra-low-power multi-sensor platforms.

While there are many facets to the inflection-point puzzle, the industry’s answers are beginning to take shape. The cost of finding new solutions will keep rising, and individual companies carry ever larger risks if their choices prove wrong. But through closer collaboration, companies can share that risk while developing solutions, exploring and creating new technologies, shortening time to market, and readying a new generation of products for a waiting world. The industry may indeed stand at an inflection point, but the future is bright. Innovation cannot be stifled. And collaboration remains the consensus of an industry focused on the next new thing. Today, IC does not just stand for Integrated Circuit; it also calls for Innovation and Collaboration.