
GOURI SANKAR KAR and ARNAUD FURNEMONT, imec, Leuven, Belgium

Every day, even every second, we produce massive amounts of data. With an estimated annual data growth rate of 1.2 to 1.4x (source: IDC’s Data Age 2025 study, March 2017), the amount of digital data produced in the world will soon exceed 100 zettabytes. To grasp the meaning of this number: if we wanted to store all this data on terabyte solid state drives (SSDs), we would need to fill a soccer field with SSDs stacked 28 meters high. The data are partly generated through well-known applications such as Amazon, YouTube, Facebook or Netflix. But emerging IoT applications will make a significant contribution as well. Examples are the autonomous car (accounting for 4,000GB of data per day), the smart building (>275GB per day, per building) and the smart city (>1,000TB per day, per city). Huge amounts of bandwidth are required to transport all this data from the application to an edge node, then to a base station, and then to a data center, a challenge that is tackled by 5G and optical fiber technologies. Throughout this data flow, stringent requirements will be imposed on memory and storage in terms of density, bandwidth, cost and energy.

Clever data mining, and reduced energy consumption

At some point in the flow of data transport, the generated data will need to be analyzed and converted into knowledge and wisdom by means of machine learning techniques. The exact point at which this happens will significantly impact the requirements on memory and storage. For example, if machine learning can be applied just after data generation, it can help relax the requirements. If, on the other hand, data is turned into wisdom later in the process, more raw data will need to be stored throughout the whole process.

The zettabyte era will also challenge the power consumed by the growing number of data centers for processing, transporting and storing all the data. If we don’t optimize the energy consumption of these operations, data centers worldwide may use almost 8,000 terawatt-hours by 2030 (source: https://www.labs.hpe.com/next-next/energy). That’s about the amount of electricity consumed today by Europe, Africa and part of Asia combined. In recent years, several technologies have been introduced in data centers to address power and performance issues for storage, including the wide deployment of solid state drives since 2014 and the introduction of the first emerging memory technologies in 2017. But to be prepared for the zettabyte era, we will have to introduce novel non-volatile memories with unprecedented density and speed, and lower power consumption.

The slowdown of today’s memory roadmap

Let’s have a closer look at today’s memory landscape (FIGURE 1). Close to the central processing unit (CPU), fast, volatile embedded static random access memories (SRAMs) are the dominant memories. Also on chip are the higher cache memories, mostly made in SRAM or embedded dynamic random access memory (DRAM) technologies. Off-chip, further away from the CPU, you will mainly find DRAM chips for the working memory, non-volatile Flash NAND memory chips for storage, and tapes for long-term archival applications. In general, memories located further away from the CPU are cheaper, slower, denser and less volatile.

FIGURE 1. Application and performance space of the existing conventional memories (HPC = high performance computing; DSP = digital signal processing; ROM = read only memory; duty cycle = the proportion of time during which the memory device is operated).

For half a century, Moore’s Law has driven cost improvement of memory technologies, and this has translated into a continuous increase of memory density. However, despite these large improvements, only storage density (Flash NAND devices and tapes) has truly kept pace with the data growth rate. With the transition from NAND to 3D-NAND devices, density improvement for this storage class is expected to slow down as well and soon fall below the data growth rate.

Unlike logic development, which has always been driven by improvements in both cost and device physics, memory development has paid little attention to improving power and performance. As a result, energy reduction and speed improvement lag far behind the data growth rate, for both memory and storage devices (FIGURE 2).

FIGURE 2. Growth rates for the main memory roadmaps; (red) only storage density has kept pace with the data growth rate.

Emerging technologies to the rescue?

To meet the memory requirements of the zettabyte era (i.e., improved density and speed, and reduced energy consumption), imec is exploring multiple emerging memory options, for standalone as well as for embedded applications (FIGURE 3). Options range from MRAM technologies for cache level applications, new ways for improving DRAM devices, emerging storage class memories to fill the gap between DRAM and NAND technologies, solutions for improving 3D-NAND storage devices, and a revolutionary solution for archival type of applications. Below, we review their status and challenges, and investigate whether these emerging memory roadmaps can cope with the zettabyte world.

FIGURE 3. Memory research landscape at imec – application and performance space.

MRAM technologies for embedded cache level applications

Spin transfer torque MRAM (STT-MRAM) technology has emerged as a candidate technology for replacing L3 cache embedded SRAM memories. It offers non-volatility, high density, high speed and low switching current. The core element of an STT-MRAM device is a magnetic tunnel junction in which a thin dielectric layer is sandwiched between a magnetic fixed layer and a magnetic free layer. Writing of the memory cell is performed by switching the magnetization of the free magnetic layer, by means of a current that is injected perpendicularly into the magnetic tunnel junction.

Because of this geometry, the read and write operations are performed through the same path, and this challenges the reliability of the device. These reliability issues in combination with increased energy at sub-ns switching speeds make STT-MRAM memories unsuitable for replacing the faster L1/L2 cache SRAM memories.

An MRAM variant, the spin orbit torque MRAM (SOT-MRAM), can overcome these issues. In these devices, the free magnetic layer is switched by injecting an in-plane current into an adjacent SOT layer, thereby decoupling the read and write paths and improving the device’s endurance and stability. Imec recently demonstrated the ability to fabricate state-of-the-art SOT-MRAM devices on 300mm wafers using CMOS-compatible processes (FIGURE 4). The devices exhibited virtually unlimited endurance (>5×10^10 cycles), fast switching speed (240ps), and switching energy as low as 300pJ. We also explore ways to further reduce energy consumption, by bringing down the switching current and demonstrating field-free switching.

FIGURE 4. (Left) SOT-MRAM device, and (right) SOT switching distribution as a function of pulse voltage for various pulse lengths.

An imec view on DRAM scaling

DRAM is structurally a very simple type of memory. A DRAM memory cell consists of one transistor and one capacitor, that can be either charged or discharged. Traditionally, double-sided cylindrical capacitor structures have been used, with a dielectric material contained inside as well as outside the capacitor structure. However, when spacing becomes smaller, we need to increase the aspect ratio of these structures – since a certain amount of capacitor area is needed to maintain the memory’s performance. For very large aspect ratios, these DRAM structures are fabricated at the limits of mechanical stability. The industry may therefore transition to a new capacitor architecture: the high-aspect ratio one-pillar architecture, with the dielectric film now only on the outside (FIGURE 5). This change may make it possible to use thicker films of higher-k dielectrics, which in turn will allow reducing the aspect ratio of the one-pillar structure and improving leakage. A new dielectric with these specifications is currently under development at imec.
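
To make the aspect-ratio trade-off concrete, the short sketch below uses a simple parallel-plate approximation over the pillar sidewall; the cell capacitance, pillar diameter, k-values and film thicknesses are illustrative assumptions, not imec data.

```python
# Back-of-the-envelope sketch (assumed numbers): how a thicker, higher-k
# dielectric lets a one-pillar DRAM capacitor reach the same cell
# capacitance at a lower aspect ratio. Parallel-plate approximation
# over the pillar sidewall: C ~ eps0 * k * (pi * d * h) / t.
import math

EPS0 = 8.854e-12          # vacuum permittivity, F/m

def pillar_height(c_target, k, t_dielectric, pillar_diameter):
    """Pillar height needed so that eps0*k*(pi*d*h)/t reaches c_target."""
    return c_target * t_dielectric / (EPS0 * k * math.pi * pillar_diameter)

C_CELL = 10e-15           # ~10 fF per cell (order-of-magnitude assumption)
D = 30e-9                 # assumed pillar diameter

for label, k, t in [("today-like dielectric", 20, 5e-9),
                    ("higher-k, thicker film", 50, 8e-9)]:
    h = pillar_height(C_CELL, k, t, D)
    print(f"{label}: height ~{h * 1e6:.1f} um, aspect ratio ~{h / D:.0f}")
```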

FIGURE 5. An imec view on the DRAM scaling roadmap; inset: impact of the one-pillar capacitor architecture

Further down the road, we are investigating whether we can place the peripheral logic directly under the array of capacitors and transistors. This logic circuitry controls how data is moved to and from the memory chip, and typically consumes considerable area. Today, however, the transistor of the DRAM memory cell is built in silicon. To be able to move the peripheral logic underneath the DRAM array, we need to replace this transistor with a non-Si transistor that is back-end compatible. At imec, we are moving towards a thin-film indium gallium zinc oxide (IGZO)-based transistor (FIGURE 6). This architecture is expected to give us one generation of Moore’s law scaling. In addition, it will enable ultimate 3D DRAM integration as well.

FIGURE 6. Novel oxide semiconductor based DRAM cell transistor.

Emerging memory and selector concepts for storage class memory

Storage class memory has been introduced to fill the gap between DRAM and NAND Flash memories in terms of latency, density, cost and performance. This new memory class should allow massive amounts of data to be accessed in a very short time. Most probably, more than one novel memory technology will be required to span the entire gap. The imec team explores several emerging technologies for storage class memory, including various cross-point-based architectures for the memory element, such as phase-change RAM (PC-RAM), vacancy-modulated conductive oxide (VMCO), conductive-bridging RAM (CB-RAM) and oxide RAM (OxRAM). For high-density applications, all these memory elements require a two-terminal selector element connected in series with each memory element. These selector elements suppress the sneak currents that run through the unselected cells of the cross-point array during memory operation. Imec is developing GeSe-based ovonic threshold switching (OTS) selector devices that fulfill the requirements imposed by high-density storage class memories, including high thermal and electrical stability, high current density and low off-state current.
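
The sketch below illustrates why the selector matters; it is a rough, textbook-style estimate with assumed nonlinearity values, not an imec device model. In a V/2 bias scheme, the 2(N-1) half-selected cells on the selected row and column leak current that adds to the read signal, and only a highly non-linear selector keeps that leakage small for large arrays.

```python
# Minimal sketch (illustrative numbers only): why high-density cross-point
# arrays need a non-linear selector in series with each memory element.

def sneak_to_signal(n, half_bias_nonlinearity):
    """half_bias_nonlinearity = I(V_read) / I(V_read / 2) for one cell
    (memory element plus optional selector). Returns the ratio of total
    half-selected leakage to the selected-cell read current."""
    half_selected_cells = 2 * (n - 1)
    return half_selected_cells / half_bias_nonlinearity

for n in (128, 1024, 4096):
    bare_cell = sneak_to_signal(n, half_bias_nonlinearity=5)     # element alone (assumed)
    with_ots  = sneak_to_signal(n, half_bias_nonlinearity=1e5)   # OTS-like selector (assumed)
    print(f"{n}x{n} array: sneak/signal ~{bare_cell:.0f} without a selector, "
          f"~{with_ots:.3f} with one")
```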

Close to the DRAM side of the DRAM-NAND gap, high-density, high-speed MRAM offers an interesting, scalable alternative for the memory element. As this technology requires a selector as well, imec is working on a diode-based selector for this type of device. And finally, closer to the NAND Flash side, a ferroelectric memory based on hafnium oxide (HfO) has gained interest. Compared to NAND Flash, this emerging memory can operate at lower voltages and is much faster (with speeds down to 100ns). The simple cell structure can be fabricated with CMOS-compatible processes and can be built in a 3D architecture.

3D NAND… and beyond?

Since its introduction several years ago, 3D NAND has become a mainstream storage technology because of its ability to significantly increase bit density. This is enabled by moving from 3 to 4 bits per cell and, instead of traditional x-y scaling in a horizontal plane, by scaling in the z direction, stacking multiple layers of NAND gates vertically. Today, stacking more than 60 layers has become possible. However, increasing the number of layers challenges the deposition and etch processes. Also, the more layers that are stacked, the more stress builds up inside the layers, which can cause the 3D NAND pattern to collapse, a challenge that imec is tackling. Imec also investigates alternative channel materials and processes, such as a silicon macaroni channel, that can overcome the limitations of the traditionally used poly-Si channels.

Despite all these advances, the density improvement of 3D NAND is expected to slow down and go below data growth rate soon. Therefore, the search for an emerging memory technology that is faster and cheaper than 3D NAND is ongoing. So far, there are no good candidates out there that can beat the 3D NAND density, especially because of the outstanding capability of 3D NAND Flash technology to integrate 3-4 bits per cell.

DNA storage: the holy grail of archival storage?

Imagine we could store all the data in the world in a container the size of a car, and keep it for a very long time. That is exactly what DNA storage promises. DNA can be kept stable for millions of years (today, it is still possible to extract DNA from the woolly mammoth), guaranteeing long-term retention. DNA as a storage medium is also extremely dense and compact. Writing can be performed by encoding binary data onto strands of DNA through the process of DNA synthesis: the DNA strand is built up, through a series of deprotection and protection reactions, with the sequence of bases representing a specific bit pattern. On the read side, there is an enormous technology push to sequence DNA ever faster and at lower cost. Progress in DNA sequencing has been amazing, even outpacing Moore’s law. But researchers still have a long way to go before reasonable targets (1Gb/s) can be reached. To realize this, faster fluidics, faster chemical reactions and much higher parallelism are needed than what’s possible today. At imec, we work towards faster write/read operations, and towards making DNA storage a cost-effective solution for long-term storage.
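
As a toy illustration of the write-side idea only (real codes add redundancy, addressing and sequence constraints such as avoiding long homopolymer runs), the snippet below maps two bits onto each base and back:

```python
# Toy sketch of binary-to-DNA encoding: 2 bits per base, 4 bases per byte.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"zettabyte")
assert decode(strand) == b"zettabyte"
print(strand)   # starts with 'CTGG' -- the byte 0x7A ('z') encoded as four bases
```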

Towards a sustainable zettabyte era

It has become clear that the classical memory roadmap cannot handle the zettabyte world in terms of energy, density, speed and cost. As shown above, imec is working on several emerging memory and storage technologies that can largely improve on density, system performance, and, partly, speed. However, energy consumption remains the biggest challenge towards a truly sustainable zettabyte era. And this highlights the need to continue collaborating with academia and industry on reduced energy consumption for memory technologies.

Talking about sustainability brings another aspect of the zettabyte era to mind: recycling. To be able to process and store all the data, massive amounts of devices will be produced. The advent of emerging technologies will also bring in new materials, which today are hardly recycled. To enable a truly sustainable zettabyte era, the semiconductor industry should therefore also find ways to improve the recyclability of all these materials.

GOURI SANKAR KAR is a distinguished member of technical staff for emerging memories, and ARNAUD FURNEMONT is memory director at imec, Leuven, Belgium.

Editor’s Note: This article was originally published in the October 2018 issue of Solid State Technology. 

Tom Ho and Stewart Chalmers, BISTel, Santa Clara, CA

Traditional FDC systems collect data from production equipment, summarize it, and compare it to control limits that were previously set up by engineers. Software alarms are triggered when any of the summarized data fall outside of the control limits. While this method has been effective and widely deployed, it does create a few challenges for the engineers:

  • The use of summary data means that (1) subtle changes in the process may not be noticed and (2) unmonitored sections of the process will be overlooked by a typical FDC system. These subtle changes or missed anomalies in unmonitored sections may result in critical problems.
  • Modeling control limits for fault detection is a manual process, prone to human error and process drift. With hundreds of thousands of sensors in a complex manufacturing process, the task of modeling control limits is extremely time consuming and requires a deep understanding of the particular manufacturing process on the part of the engineer. Non-optimized control limits result in misdetection: false alarms or missed alarms.
  • As equipment ages, processes change. Meticulously set control limit ranges must be adjusted, requiring engineers to constantly monitor equipment and sensor data to avoid false alarms or missed real alarms.
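
A minimal sketch of this traditional, summary-statistic approach is shown below; the sensor values, monitored window and control limits are hypothetical, chosen only to show how a spike outside the monitored segment goes unnoticed.

```python
# Rough sketch of a traditional FDC check (all names and limits hypothetical):
# summarize a windowed segment of a sensor trace into one statistic and
# compare it against fixed control limits set by an engineer.
import numpy as np

def fdc_check(trace, window, limits):
    """trace: 1-D array of sensor samples; window: (start, stop) indices of the
    monitored recipe-step segment; limits: (lcl, ucl). Returns (summary, alarm)."""
    start, stop = window
    summary = float(np.mean(trace[start:stop]))   # summary statistic, e.g. step mean
    lcl, ucl = limits
    return summary, not (lcl <= summary <= ucl)   # True -> alarm

trace = np.random.default_rng(0).normal(200.0, 0.5, size=600)
trace[50:60] += 8.0        # a short spike outside the monitored window
summary, alarm = fdc_check(trace, window=(300, 450), limits=(198.0, 202.0))
print(summary, alarm)      # the spike goes unnoticed: summary stays within limits
```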

Full sensor trace detection

A new approach, Dynamic Fault Detection (DFD) was developed to address the shortcomings of traditional FDC systems and save both production time and engineer time. DFD takes advantage of the full trace from each and every sensor to detect any issues during a manufacturing process. By analyzing each trace in its entirety, and running them through intelligent software, the system is able to comprehensively identify potential issues and errors as they occur. As the Adaptive Intelligence behind Dynamic Fault Detection learns each unique production environment, it will be able to identify process anomalies in real time without the need for manual adjustment from engineers. Great savings can be realized by early detection, increased engineer productivity, and containment of malfunctions.

DFD’s strength is its ability to analyze full trace data. As shown in FIGURE 1, there are many subtle details on a trace, such as spikes, shifts, and ramp rate changes, which are typically ignored or go undetected by traditional FDC systems because they examine only summary data for a segment of the trace. By analyzing the full trace, DFD can easily identify these details and provide a more thorough analysis than ever before.

Dynamic referencing

Unlike traditional FDC deployments, DFD does not require control limit modeling. The novel solution adapts machine learning techniques to take advantage of neighboring traces as references, so control limits are dynamically defined in real time.  Not only does this substantially reduce set up and deployment time of a fault detection system, it also eliminates the need for an engineer to continuously maintain the model. Since the analysis is done in real time, the model evolves and adapts to any process shifts as new reference traces are added.

DFD has multiple reference configurations available for engineers to choose from to fine tune detection accuracy. For example, DFD can 1) use traces within a wafer lot as reference, 2) use traces from the last N wafers as reference, 3) use “golden” traces as reference, or 4) a combination of the above.
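
The snippet below is a hedged sketch of the dynamic-referencing idea, not BISTel's actual algorithm: it builds a point-by-point envelope from neighboring reference traces, flags a wafer whose full trace leaves that envelope, and issues at most one alarm per wafer.

```python
# Illustrative sketch of dynamic referencing with full-trace comparison.
import numpy as np

def dfd_alarm(trace, reference_traces, k=10.0):
    """Compare a full wafer trace against an envelope built from reference
    traces (e.g. the last N wafers). k is an assumed sensitivity knob."""
    refs = np.asarray(reference_traces)                      # shape: (n_refs, n_samples)
    center = np.median(refs, axis=0)                         # point-by-point reference
    spread = np.median(np.abs(refs - center), axis=0) + 1e-9 # robust per-point spread (MAD)
    deviation = np.abs(trace - center) / spread
    return bool(np.any(deviation > k))                       # at most one alarm per wafer

rng = np.random.default_rng(1)
refs = [200 + 0.5 * rng.standard_normal(600) for _ in range(24)]  # last-N-wafer references
good = 200 + 0.5 * rng.standard_normal(600)
bad = good.copy()
bad[100:130] += 6.0            # a subtle shift that a step-mean statistic could miss
print(dfd_alarm(good, refs), dfd_alarm(bad, refs))   # expected: False, True
```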

As more sensors are added to the Internet of Things network of a production plant, DFD can integrate their data into its decision-making process.

Optimized alarming

Thousands of process alarms inundate engineers each day, only a small percentage of which are valid. In today’s FDC systems, one of the main causes of false alarms is improperly configured Statistical Process Control (SPC) limits. Also, a typical FDC system may generate one alarm for each limit violation, resulting in many alarms for each wafer processed. DFD implementations require no control limits, greatly reducing the potential for false alarms. In addition, DFD is designed to issue only one alarm per wafer, further streamlining the alarming system and providing better focus for the engineers.

Dynamic fault detection use cases

The following examples illustrate actual use cases to show the benefits of utilizing DFD for fault detection.

Use case #1 – End Point Abnormal Etching

In this example, both the upper and lower control limits in SPC were not set at the optimum levels, preventing the traditional FDC system from detecting several abnormally etched wafers (FIGURE 2).  No SPC alarms were issued to notify the engineer.

On the other hand, DFD full trace comparison easily detects the abnormality by comparing to neighboring traces (FIGURE 3).  This was accomplished without having to set up any control limits.

Use case #2 – Resist Bake Plate Temperature

The SPC chart in Figure 4 clearly shows that the Resist bake plate temperature pattern changed significantly; however, since the temperature range during the process never exceeded the control limits, SPC did not issue any alarms.

When the same parameter was analyzed using DFD, the temperature profile abnormality was easily identified, and the software notified an engineer (FIGURE 5).

Use case #3 – Full Trace Coverage

Engineers select only a segment of sensor trace data to monitor because setting up SPC limits is so arduous. In this specific case, the SPC system was set up to monitor only the He_Flow parameter in recipe step 3 and step 4.  Since no unusual events occurred during those steps in the process, no SPC alarms were triggered.

However, in that same production run, a DFD alarm was issued for one of the wafers. Upon examination of the trace summary chart shown in FIGURE 6, it is clear that while the parameter behaved normally during recipe step 3 and step 4, there was a noticeable issue from one of the wafers during recipe step 1 and step 2.  The trace in red represents the offending trace versus the rest of the (normal) population in blue. DFD full trace analysis caught the abnormality.

Use case #4 – DFD Alarm Accuracy

When setting up SPC limits in a conventional FDC system, the method of calculation taken by an engineer can yield vastly different results. In this example, the engineer used multiple SPC approaches to monitor parameter Match_LoadCap in an etcher. When the control limits were set using Standard Deviation (FIGURE 7), a large number of false alarms were triggered.   On the other hand, zero alarms were triggered using the Mean approach (FIGURE 8).

Using DFD full trace detection eliminates the discrepancy between calculation methods. In the above example, DFD was able to identify an issue with one of the wafers in recipe step 3 and trigger only one alarm.

Dynamic fault detection scope of use

DFD is designed to be used in production environments of many types, ranging from semiconductor manufacturing to automotive plants and everything in between. As long as the manufacturing equipment being monitored generates systematic and consistent trace patterns, such as gas flow, temperature, pressure, power etc., proper referencing can be established by the Adaptive Intelligence (AI) to identify abnormalities. Sensor traces from Process of Record (POR) runs may be used as starting references.

Conclusion

The DFD solution reduces risk in manufacturing by protecting against events that impact yield.  It also provides engineers with an innovative new tool that addresses several limitations of today’s traditional FDC systems.  As shown in TABLE 1, the solution greatly reduces the time required for deployment and maintenance, while providing a more thorough and accurate detection of issues.

 

TABLE 1. Traditional FDC vs. DFD (per recipe/tool type)

Metric | FDC | DFD
FDC model creation | 1 – 2 weeks | < 1 day
FDC model validation and fine tuning | 2 – 3 weeks | < 1 week
Model maintenance | Ongoing | Minimal
Typical alarm rate | 100 – 500 / chamber-day | < 50 / chamber-day
Coverage of number of sensors | 50 – 60% | 100% by default
Trace segment coverage | 20 – 40% | 100%
Adaptive to systematic behavior changes | No | Yes

TOM HO is President of BISTel America, where he leads global product engineering and development efforts for BISTel. [email protected]. STEWART CHALMERS is President & CEO of Hill + Kincaid, a technical marketing firm. [email protected].

Editor’s Note: This article was originally published in the October 2018 issue of Solid State Technology. 

PRASAD BACHIRAJU, Rudolph Technologies, Inc., Wilmington, Mass.

As IC manufacturers adopt advanced packaging processes and heterogeneous, multi-chip integration schemes to feed ever-greater consumer demand for more computing power in smaller and smaller spaces, their supply chains have become increasingly complex. A fab-wide view of the process, not so long ago the holy grail of yield management systems, now seems quaintly inadequate. Critical processes that affect finished product yields now occur in different facilities running diverse processes at locations spread around the world. A packaged electronics module may combine microprocessors, memory, MEMS sensors and RF communications, all from different fabs and each with its own particular history. Optimizing yields from such a complicated supply chain requires access to individual component genealogy that includes detailed knowledge of process events, equipment malfunctions and operational parameters, and much more. The latest generation of yield optimization systems consolidates this data – tool deep and supply-chain wide – in a monolithic database, providing end-to-end, die-level traceability, from bare wafer to final module test. Specialized algorithms, designed for “big data” but based on intimate knowledge of semiconductor manufacturing processes, can find hidden correlations among parameters, events and conditions that guide engineers to the root causes of yield losses and ultimately deliver increased fab productivity, higher process yields, and more reliable products.

Proactive Yield Perfection

Proactive yield perfection (PYP) is a comprehensive systems-level approach to perfecting yield through detailed surveillance and sophisticated modeling that identifies actual or potential root causes of excursions and establishes monitoring mechanisms to anticipate and proactively address problems before they result in yield loss. PYP is a logical extension of long-standing yield management practices that comprehends the dispersal and growing complexity of the manufacturing process and combines information from conventional defect detection, yield analysis, automated process control, and fault detection and classification techniques. It addresses two major obstacles identified within the industry: providing access to data across the fab and supply chain and integrating that data into device-level genealogy. PYP’s ability to correct small problems early is increasingly valuable in a complex supply chain where each step represents a considerable additional investment and a flawed finished module results in the costly loss of multiple component devices.

PYP collects data in a single database that integrates vital product parameter data from every die at each step in the process with performance and condition data from all tools, factories and providers in the supply chain. It provides unprecedented visibility of the entire manufacturing process from design, through wafer fab, test, assembly and packaging. Consolidating the data in a single, internally consistent database eliminates the considerable time often consumed in simply locating and aligning data stored in disparate databases at multiple facilities, and allows analytical routines to find correlations among widely separated observations that are otherwise invisible (FIGURE 1).

FIGURE 1. The PYP ecosystem

Genealogy incorporates traditional device-level traceability of every die, but also provides access to all information available from sensors on the tool (temperature, pressure), process events (e.g. lot-to-lot changes and queue times), equipment events (e.g. alarms and preventive maintenance), changes in process configurations (e.g. specifications and recipes), and any other event or condition captured in the database for sophisticated analysis. Ready access to device genealogy allows analysts to trace back from failed devices to find commonalities that identify root causes, and to trace forward from causes and events to find other devices at risk of failure.
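
As an illustration of what die-level genealogy enables (the record fields below are hypothetical, not Rudolph's schema), a simple data model supports both directions of analysis: tracing back from failed modules to shared wafers and events, and tracing forward from a suspect wafer to every module that contains its die.

```python
# Illustrative genealogy data model: trace-back and trace-forward sketches.
from dataclasses import dataclass, field

@dataclass
class DieRecord:
    die_id: str
    wafer_id: str
    xy: tuple                                     # die coordinates on the source wafer
    events: list = field(default_factory=list)    # tool alarms, recipe changes, ...

@dataclass
class ModuleRecord:
    module_id: str
    components: list                              # DieRecord objects from different fabs
    final_test_pass: bool

def trace_back(modules):
    """Failed modules -> source wafers their components have in common."""
    wafers = {}
    for module in modules:
        if not module.final_test_pass:
            for die in module.components:
                wafers.setdefault(die.wafer_id, []).append(die)
    return wafers

def trace_forward(modules, suspect_wafer):
    """A suspect wafer -> every module containing one of its die."""
    return [m for m in modules
            if any(d.wafer_id == suspect_wafer for d in m.components)]
```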

Mobile communications

A leading global supplier of mobile communications products has implemented the full PYP suite across their supply chain (FIGURE 2), which includes three separately located front-end fabs, a fabless design facility, outsourced manufacturing (foundries), and outsourced assembly and test (OSAT). Finished modules integrate data processing, data storage, RF communications, power management, analog sensing, and other functions using multiple die and components fabricated at various facilities around the world. The PYP suite comprises fault detection and classification, yield management, defect detection and classification, and run-to-run automated process control, including configurable dashboard displays that provide interactive drill-down reports and scheduled user-definable reports. The process flow includes front-end wafer-based processing, with singulated die then transferred to tape and subsequently to rectangular panels for back-end processing (FIGURE 3).

FIGURE 2. Numerous locations worldwide feed information into a single big-data cluster where multiple software solutions operate on the pre-aligned and internally consistent data set.

FIGURE 3. In a typical process flow, die move from wafer (front-end and back-end wafer fab), to reel, to panel, to final test and finally to the customer.

In one instance (FIGURE 4), panel mounted die were failing in back-end processing, causing yield losses valued at hundreds of thousands of dollars. The failed die were traced back to over a thousand wafers, and analysis of those wafers, using spatial pattern recognition, revealed defect clusters near the wafer edges. Further analysis of integrated MES (wafer process history) data traced the clusters to a defective tool that leaked etching solution onto wafers. Engineers are currently evaluating sensor data from the tool in an effort to identify a signal that will permit proactive intervention to prevent similar losses in the future.

FIGURE 4. A potential loss of hundreds of thousands of dollars due to final test on panel (left) was traced back to over a thousand wafers. Through yield analysis, an SPR pattern signaled wafers with edge defect clusters (center). Finally, the root cause is understood through MES analysis (right).

In another case, failed die on panels in the back-end showed a characteristic “strip” signature (FIGURE 5). Tracing the die backward revealed a front-end process issue, with the failing die all originating from locations near the center of the wafer.

FIGURE 5. The bad die (red strips) on the panel (left) were traced back to the centers of the original wafers.

A final example from this manufacturer occurred in the back-end, where die containing various filter technologies, fabricated on wafers of different sizes, are combined on a panel substrate (FIGURE 6). In this case, the customer gained additional insights regarding the origin of the die being assembled and was able to evaluate how shifts in performance parameters impacted the final module product. This resulted in better matching of parts in the pre-assembly process and a tighter distribution of performance parameters in the outgoing modules.

FIGURE 6. A single panel contains different filter components from multiple (differently-sized) wafers. Die-level traceability allows engineers to relate final test results to inspection and metrology data collected during wafer processing. Die can be traced back to their individual location on the original wafer, permitting associations with location-specific data such as defect patterns discovered by SPR.

Automotive electronics

A leading global supplier of electronic systems to the automotive market manufactures finished modules containing multiple ICs from various suppliers and facilities. Highest reliability with component failure rates at parts-per-billion levels is an absolute requirement because the health and safety of millions of drivers may be jeopardized by a defective product. Limiting these risks, and the associated financial liability, through fast root-cause analysis of in-house test failures and field returns and rapid identification of process drift or step function changes are critical needs. Tuning the process to improve yields while preserving critical reliability is an equally important economic concern.

The final product is a multi-chip module containing microelectromechanical system (MEMS)-based sensors from one supplier and application-specific integrated circuits (ASICs) from another. These component parts are functionally tested prior to being sent to an assembly facility where they are attached to a common carrier and packaged in a sealed module, which is then retested to verify functionality of the completed assembly. PYP software collects data across the entire supply chain. Devices from a single wafer lot are ultimately split and mixed among many modules. Using a commonality analysis, engineers can quickly identify die that share a similar risk and track them to their ultimate dispositions in finished modules. The tracking is not limited to the wafer level. For instance, it might be used to find only those die located on the straight-line extension of a known crack, or within at-risk regions identified by spatial pattern recognition (SPR). Such information is critical in issuing a recall of at-risk parts.
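
A bare-bones sketch of such a commonality analysis is shown below; the attribute names and records are made up, and real implementations weight by population size and statistical significance rather than a raw failure rate.

```python
# Hedged sketch of a commonality analysis: rank attribute values (wafer lot,
# assembly day, tester, ...) by how over-represented they are among failures.
from collections import Counter

def commonality(records, attribute, failed_key="failed"):
    fails = Counter(r[attribute] for r in records if r[failed_key])
    total = Counter(r[attribute] for r in records)
    scored = {value: fails[value] / total[value] for value in fails}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

records = [
    {"lot": "LOT_A", "assembly_day": "D1", "failed": True},
    {"lot": "LOT_A", "assembly_day": "D1", "failed": True},
    {"lot": "LOT_B", "assembly_day": "D1", "failed": False},
    {"lot": "LOT_B", "assembly_day": "D2", "failed": False},
]
print(commonality(records, "lot"))            # LOT_A stands out among failures
print(commonality(records, "assembly_day"))   # D1 is shared by all failures
```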

In this case, die that were known-good at wafer test were failing after assembly. Tracing back to the wafer level, the customer determined that all affected modules were assembled within a short time period of one another. Further investigation found that the packaging process was affecting peak-to-peak voltage at final test. The customer was not able to modify the assembly process, but they were able to eliminate the final test losses by tuning wafer probe specifications to screen out die at risk of damage in the assembly operation. Die-level traceability across the supply chain, which allowed engineers to quickly and easily compare data sets on the same die from wafer probe and final test, was key to achieving this solution.

Conclusion

Increasingly dispersed and complex supply chains require a proactive, integrated, systems-level approach to optimizing yields. PYP’s ability to integrate data – sensor-deep and supply chain-wide – in a monolithic database streamlines analysis and finds relationships that are otherwise invisible. Die-level genealogy allows engineers to trace die histories backward to find root causes of failures and forward to identify other die similarly at-risk. The value of PYP-based solutions is multiplied by the substantial investments made at each step of the process and the high cost and potential financial liability associated with failed, multi-chip modules.

PRASAD BACHIRAJU is Director, Customer Solutions, Rudolph Technologies, Inc., Wilmington, Mass.

Editor’s Note: This article was originally published in the October 2018 issue of Solid State Technology. 


August 14th at 1:00 p.m. ET

Wet Processing, including wafer cleaning, is one of the most common yet most critical processing steps, since it can have a huge impact on the success of the subsequent process step. Not only does it involve the removal of organic and metal contaminants, but it must also leave the surface in a desired state (hydrophilic or hydrophobic, for example), with minimal roughness and minimal surface loss, all on a growing list of different types of materials. In this webcast, experts will identify industry challenges and possible solutions, including a new concept of tailoring chemistries to dissolve very small particles rather than physically removing them.

Register here: https://event.webcasts.com/starthere.jsp?ei=1040703

Speakers:

Dr. Rick Reidy is a professor in the Department of Materials Science and Engineering at the University of North Texas (UNT). He is a member of the SEMATECH Surface Preparation and Cleaning Conference Organizing Committee and of the International Technology Roadmap for Semiconductors (ITRS) Front End Processes/Surface Preparation Technical Working Group, and a co-chair of the Interconnect Surface Preparation Focus Team. Dr. Reidy’s research interests are supercritical processing of semiconductor materials, synthesis and characterization of novel porous ceramics for dielectric, sensor and energy applications, and ultra-low-k dielectrics.

Dr. Abbas Rastegar is a Fellow in the Technical Strategy Group at SEMATECH. Abbas joined SEMATECH in 2003, where he managed the cleans program in Lithography. Since 2013, he has managed R&D in nanodefectivity at SEMATECH. Abbas has about 25 years of R&D experience in physics, macromolecules, nanoscience, and nanotechnology, with more than 150 papers published in technical journals and conferences.

 

Sponsored By

Dynaloy

SSEC

June 19, 2014 at 12:00 p.m. EST

Although Micro-Electro-Mechanical Systems (MEMS) have been around for a long time, the introduction of the technology into consumer markets, with Nintendo’s Wii in late 2006, opened the floodgates, with multiple MEMS (accelerometers, gyros, compasses, pressure sensors and microphones) now in smartphones and tablets. And you ain’t seen nothing, yet!

Register here: https://event.webcasts.com/starthere.jsp?ei=1037134

Presenter: Jay Esfandyari, Director of MEMS and Analog Product Marketing at STMicroelectronics

 

 


Presenter: Simone Severi, lead for SiGe MEMS at imec

SiGe MEMS technology for monolithic integration on CMOS

Recent developments in MEMS technologies at imec enable the monolithic integration of MEMS on CMOS. This approach represents a potential key advantage for a variety of MEMS systems, as it can lead to improved device performance and to scaling of the system area, with consequent cost and package-size reduction. Systems requiring multiple MEMS devices on chip, or large MEMS arrays, will benefit most from this approach.

One route focuses on direct post-processing, on top of CMOS, of a low-temperature SiGe material that is fully compatible with standard Al back-end-of-line processes. This platform can realize a compact system for multi-sensing applications. Accelerometers, capacitive pressure sensors, compasses and temperature sensors are among the candidate sensors to be combined on the same chip, e.g., to yield implantable (or wearable) products for the medical field, or chips for the consumer or automotive market. Arrays of capacitive micromachined ultrasonic transducers have been demonstrated for potential medical imaging systems. A key asset for the success of this CMOS-MEMS monolithic approach is the implementation of a hermetic thin-film packaging technology. Thin-film hermetic SiGe packages are demonstrated with cavity pressures ranging from a few Pa up to several hundred mbar. This technology has the potential to enable a scaling of the form factor while reducing packaging and testing costs.

Sponsored by:

SSEC

