Data collection and networking capabilities enable pump predictive diagnostics

07/01/2005

Predictive diagnostics techniques are applied to networked vacuum systems in wafer-processing applications using an array of data collection, mining, and advanced analysis concepts. By utilizing data collection and networking capabilities of modern pumping systems, the goal is to screen out normal variations in process operating conditions while automatically identifying early warning signs of real problems building up in vacuum pumps - before failures occur.

Vacuum pumping in semiconductor manufacturing has progressed significantly in the last 10 years. Gone are the huge control boxes of relay switches, which used to be mounted on the wall. Instead, chamber vacuum and abatement systems are under the control of microprocessors, inverter drives, factory networks, and sophisticated software. With improved access to pump dynamics, suppliers are able to give the process tool owner a great deal of information about the status of pumping systems.

A pump failure is unusual, but avoiding an unpredicted failure can bring massive productivity benefits. Tool users want to know if a vacuum pump is running outside its normal operating parameters long before a failure occurs. Growing pressure on profit margins has led IC manufacturers to increase their focus on improving process yields, tool uptime, and wafer throughput. At the same time, the ramp-up of 300mm tools and the introduction of new material technologies and cleaning flows constantly raise the by-product challenge and increase the value of wafers.

Determining pump performance is relatively straightforward for some slow-changing parameters such as temperature, but in the past accurate diagnosis has not always been possible for some of the more esoteric conditions found in pumping applications, such as powder buildup or exhaust blockage, which can result in a dramatic failure.

By utilizing the data collection and networking capabilities of modern pumping systems, it is possible to generate a database of pump parameters against time. This has allowed experienced engineers to identify pumps that are at risk and take corrective action. However, this approach is often constrained by the amount of time that engineers have to analyze data.

Advanced data analysis - such as surge and dip identification, data mining, and system modeling - is now being developed and used to identify the multivariable signatures of emerging pump failures. The solutions are in the ability to detect potential problems against the normal variations in process operating conditions that vacuum pumps experience. Once detected, the urgency of response can be easily determined.
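Surge and dip identification of the kind described above can be sketched as a simple local-outlier test: a reading is flagged when it deviates sharply from its neighbors. This is an illustrative sketch only, not the analysis used in the actual system; the window and threshold values are assumptions.

```python
def detect_surges(readings, window=5, threshold=3.0):
    """Flag indices where a reading deviates from the local median
    by more than `threshold` times the local spread (surge or dip).

    Illustrative sketch; window and threshold are assumed values.
    """
    events = []
    for i in range(len(readings)):
        lo = max(0, i - window)
        hi = min(len(readings), i + window + 1)
        # Neighborhood excludes the point under test.
        neighbourhood = sorted(readings[lo:i] + readings[i + 1:hi])
        if not neighbourhood:
            continue
        median = neighbourhood[len(neighbourhood) // 2]
        spread = (neighbourhood[-1] - neighbourhood[0]) or 1e-9
        deviation = readings[i] - median
        if abs(deviation) > threshold * spread:
            events.append((i, "surge" if deviation > 0 else "dip"))
    return events
```

A spike in an otherwise flat power trace, for example, would be reported as a single surge event; the normal cyclical variation of a process run, where neighboring points move together, would not.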

Evolution of failure prediction

Failure prediction is based on data collected from pumps. Until recently, the scope of our predictive diagnostics activities has included onboard pump warning systems, countdowns to scheduled service, and historical expert visual analysis.

While these methods have proved fruitful in the past, there are limitations. Onboard pump warnings, for example, are applied to single-sensor parameters (such as temperature) and do not account for the interaction between variables. Countdowns-to-service techniques are generally based on the average life expectancy of a pump or planned service intervals. As a result, it is difficult to account for fluctuations in fab line utilization and specific processing techniques, which can vary widely from plant to plant, even inside the same company. Historical expert visual analysis - the most successful of all methods - incorporates the graphical inspection of key pump parameters through time. By manually monitoring the magnitude, frequency, and correlation of events between variables, an experienced engineer is able to assess a pump’s status. Normal operation of pumps can be seen from cyclical data patterns that reflect semiconductor process runs with no excessively high peaks (Fig. 1). However, data patterns showing high, volatile, and frequent peak events - which cannot be theoretically verified - represent pumps that require maintenance because they are under stress (Fig. 2).


Figure 1. Data indicating normal operation.

The use of expert visual analysis is often limited by the availability of experienced engineers, who may be unable to continually monitor and analyze data from all pumps in a fab. Thus, much of the expert analysis must be retrospective in nature, resulting in the loss of valuable response time when problems are finally detected. There also is a practical limit to the number of parameters, patterns, and trends that can be comprehended by expert engineers. Each individual uses a largely subjective diagnostic technique, which results in a lack of standardization between human experts.


Figure 2. Data patterns showing time-to-failure.

To address constraints in expert visual analysis, an analysis application has been developed by BOC Edwards for its pump-network database system to predict the onset of failures in real time using automated models. The current version of the network database system, called FabWorks32, contacts service engineers with decision-making alerts that can be received by computers, telephones, pagers, or other wireless devices when particular pumps are experiencing signs of stress.

Data mining

Data mining refers to the identification of critical patterns of decision-making knowledge held within large bodies of data. Data mining techniques have made predictive diagnostics capability possible. There are four separate phases to any data-mining project:

  1. Data collection - data warehousing
  2. Data preprocessing - data cleaning and transformation
  3. Data mining/modeling - use of analytical/learning algorithms to develop a predictive model in two stages: a training phase to develop predictive models; and a testing phase to test predictive models using unseen, or new data
  4. System implementation of models - tuned to each user’s requirements

Data collection

A variety of sensors have been incorporated in pumping systems for many years. Key sensor parameters cover power levels, temperature, pressure, and flow rates. All key parameters vary continuously and interact to influence time-to-failure in pumps. Fortunately, existing sensing technology has proved sufficient to develop current models. Future data mining and analysis capabilities will optimize the use and development of sensors in upcoming vacuum pumping systems.

In the FabWorks32 system, data from a pump’s onboard controller is entered into the central database using configurable logging profiles. To make the most of available network bandwidth, BOC Edwards has developed a “delta logging” approach to collecting and logging data, in which a parameter reading is only delivered to the database if it has changed by a predefined amount since the last transmitted value. Field experience suggests delta logging can reduce the volume of transmitted data five- to tenfold by suppressing readings that have not changed significantly. The savings can be used to boost the frequency of data updates, increase the number of parameter inputs/pump, or add more pumps to the same network; in practice, the reclaimed bandwidth is generally used to support more pumps and/or parameter readings. Analysis has revealed that the change limits for key variables are low enough to ensure that no information is lost for real-time diagnosis.
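The delta-logging idea can be sketched in a few lines: a reading is forwarded to the database only when it differs from the last transmitted value by at least a configured delta. The parameter name and delta value here are illustrative assumptions, not the system's actual configuration.

```python
class DeltaLogger:
    """Forward a parameter reading only when it changes by at least a
    configured delta since the last transmitted value.

    Minimal sketch of the delta-logging concept; parameter names and
    delta values are assumptions for illustration.
    """

    def __init__(self, deltas):
        self.deltas = deltas    # parameter -> minimum change worth logging
        self.last_sent = {}     # parameter -> last transmitted value

    def filter(self, parameter, value):
        previous = self.last_sent.get(parameter)
        if previous is None or abs(value - previous) >= self.deltas[parameter]:
            self.last_sent[parameter] = value
            return True         # transmit to the database
        return False            # suppress: change is below the delta
```

A slowly drifting parameter thus generates only occasional database writes, while a sudden excursion is transmitted immediately, which is why the change limits can be kept low without losing diagnostic information.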

With the permission of fab tool owners, performance data from dry pumps is compiled into a “data warehouse” under process conditions. This warehouse can be defined as a collection of data gathered and organized so information can be easily extracted, analyzed, and mined to further understand the collected data. Data collection takes place continually throughout the life of a pump.

Due to the range of pump variants, specific processes, and pump failure modes, effective data collection and management is crucial to data mining. As new products and process chemistries emerge, the collection of suitable modeling data should be viewed as an ongoing collaborative effort between the pump manufacturer and wafer fab.

Data preprocessing

Data preprocessing refers to the cleaning, transformation, and characterization of data in preparation for the modeling process. Much of the data contained within failed cases reflects normal running conditions and is redundant; therefore, it is important to capture the information that is significant to the failure.

Parameter event properties, their sequence, and their frequency of occurrence are key to this approach to preprocessing, as is the use of complex sequential-based mathematics. Techniques include Fourier analysis, principal components analysis, and clustering.
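One simple form of the event-oriented preprocessing described above is to compress a raw trace into a sequence of excursion events, discarding the redundant normal-running samples in between. The sketch below, with assumed baseline and threshold values, summarizes each above-threshold excursion by its start index, duration, and peak value:

```python
def extract_events(readings, baseline, threshold):
    """Summarize excursions above baseline + threshold as
    (start_index, duration, peak_value) tuples.

    Illustrative preprocessing sketch; baseline and threshold
    are assumed, per-parameter tuning values.
    """
    events, start, peak = [], None, None
    for i, value in enumerate(readings):
        if value > baseline + threshold:
            if start is None:
                start, peak = i, value      # excursion begins
            else:
                peak = max(peak, value)     # excursion continues
        elif start is not None:
            events.append((start, i - start, peak))
            start = None                    # excursion ended
    if start is not None:                   # excursion still open at end
        events.append((start, len(readings) - start, peak))
    return events
```

The resulting event sequences (magnitude, duration, frequency of occurrence) are the kind of compact representation on which techniques such as clustering can then operate.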

Predictive models

Modeling is the process of creating predictive models from adequate failure data. Modeling is carried out in two distinct phases: training and testing. In the training phase, models are derived from suitable failure data. Our training phase combines knowledge extracted from experienced engineers that is supplemented with automated algorithms using state-of-the-art software. Modeling techniques currently being explored include rule induction and artificial neural networks.

A state-based technology is employed to accurately represent the pump’s status and state of failure. The sequence of events leading to failure is characterized into different rules that have an impact on the state of the pump. Five output states are used, increasing in severity from “normal” to “stop processing.” The five states are: 1) normal; 2) predictive monitoring started; 3) recommend preventive maintenance (PM) in 48 hr; 4) do not commence processing; and 5) stop processing.
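The five-state output described above can be sketched as an ordered enumeration, with each rule voting for a state and the most severe vote winning. The rule conditions, parameter names, and thresholds below are assumptions for illustration, not the published rule sets.

```python
from enum import IntEnum


class PumpState(IntEnum):
    """The five output states, in increasing order of severity."""
    NORMAL = 1
    PREDICTIVE_MONITORING = 2
    RECOMMEND_PM_48H = 3
    DO_NOT_START_PROCESSING = 4
    STOP_PROCESSING = 5


def evaluate(readings, rules):
    """Apply every rule to the readings and report the most severe state."""
    state = PumpState.NORMAL
    for rule in rules:
        state = max(state, rule(readings))
    return state


# Hypothetical rules: parameter names and limits are illustrative only.
example_rules = [
    lambda r: (PumpState.STOP_PROCESSING
               if r["exhaust_pressure"] > 1.5 else PumpState.NORMAL),
    lambda r: (PumpState.RECOMMEND_PM_48H
               if r["motor_power"] > 2.0 else PumpState.NORMAL),
]
```

Because the states are ordered, taking the maximum over all fired rules gives a single unambiguous escalation level for the pump.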

Different rule sets have been derived for a selection of known failure modes (for example, deposition of condensable by-products and ingestion of powder). Both failure modes are applicable to harsh semiconductor processes such as LPCVD, PECVD, and SACVD. The goal is to maximize the impact from predictive diagnostics by initially targeting the processes most prone to failure and wafer loss.


Figure 3. Software operation taken from a PECVD process.

Figure 3 shows trending data for key parameters prior to an actual failure and the alerts that would have been generated. In this example, a “recommend PM in 48 hr” warning would have been scheduled 4 days prior to failure. A major alarm to “stop processing” would have been issued 3 hr prior to failure. In the test phase, a model’s effectiveness is validated by testing it with unseen, equivalent failure data. The predictive rules also are tested and refined when new failure data is available. When a pump fails, the unit is stripped down and analyzed to categorize the failure mode. Data from the failure mode is then inserted into the data warehouse in preparation for modeling. As modeling requires consistent training and testing data, rules are developed using pumps of equivalent types, processes, and failure modes.

To account for variability in the event sequence for different failure modes, BOC Edwards has created a testing and configuration environment that allows quick updates and development of models. Configured rules are tested over a number of pumps to determine if the rules are overly conservative (swapping pumps out too early) or overly optimistic (running pumps too close to failure). Thus, many different model types can be applied to match up with user expectations and requirements. Figure 4 shows the cyclical nature of data mining and modeling.
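The conservative-versus-optimistic test described above can be sketched as scoring when a rule first fires relative to the recorded failure time across a set of pump histories. The lead-time windows and parameter name here are illustrative assumptions, not the actual acceptance criteria used.

```python
def score_rule(rule, histories, min_lead=3, max_lead=96):
    """Classify a rule's alert timing over several pump histories.

    Each history is a list of (hours_before_failure, readings) samples,
    ordered from long before failure down to failure. A rule firing
    earlier than `max_lead` hours out is overly conservative (pump
    swapped too early); firing later than `min_lead` hours out, or not
    at all, is overly optimistic (pump run too close to failure).
    Lead-time limits are assumed values for illustration.
    """
    results = {"too_early": 0, "timely": 0, "too_late": 0}
    for history in histories:
        fired = [hours for hours, readings in history if rule(readings)]
        if not fired:
            results["too_late"] += 1      # never fired: ran to failure
        elif max(fired) > max_lead:
            results["too_early"] += 1     # overly conservative
        elif max(fired) < min_lead:
            results["too_late"] += 1      # overly optimistic
        else:
            results["timely"] += 1
    return results
```

Tallying these outcomes over many pumps indicates whether a configured rule set should be tightened or relaxed before it is deployed.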


Figure 4. Model feedback and update cycle for support of predictive diagnostics in pumps.

Predictive models are not restricted to predicting pump failure. Exhaust blockage is common and can occur in pump exhaust or connected pipework. Traditional pressure-detection methods provide little warning of blockage onset. A suitable model was created that employs a selection of variables to predict impending exhaust overload.

System implementation

System implementation combines data collection, preprocessing, and predictive models to generate decision-making alerts. Because a single pump can suffer from deposition, ingestion, and exhaust blockage, the corresponding models are applied simultaneously to account for the maximum number of conditions. New models can be added incrementally.

To complement data mining, a framework has been set up to ensure standard pump exchange procedures are followed when alerts are generated. Effectively communicating their meaning and consequent procedures to service personnel allows a focused response that benefits data mining. Predictive model updates are created in the shortest time possible, as new configurations are distributed and applied in a plug-and-play fashion in FabWorks32.

Conclusion

Predictive diagnostics and data mining techniques may significantly reduce costs by eliminating unexpected vacuum pump failures that could result in scrapped wafers and unplanned downtime in fab lines. Predictive diagnostics also optimizes pump-service exchange intervals through a better indication of actual pump conditions, reducing maintenance costs. With the necessary framework and systems in place, networked systems can move toward predicting the residual process life of vacuum equipment. This work has demonstrated techniques that utilize advanced real-time data analysis of information coming from existing sensors, independent of any process tool data.

Michael Mooney is a data mining specialist at BOC Edwards in England.

Gerald Shelley is a product manager at BOC Edwards; ph 800/848-9800, e-mail [email protected].