Implementing a Particle Measurement System

Harris Semiconductor needed a better way to deliver particle data. By bringing its E-mail system on-line for delivery of data and alarms, Harris improved its cleanroom monitoring.

By Curtis Phillips, Richard C. Soper & Joseph D. Jenne

Imagine mounting a tachometer in the trunk of a car. No matter how precise the instrument is, it can't overcome its lack of data delivery to the driver. Harris Semiconductor's (Palm Bay, FL) cleanrooms had a similar problem with particle measurement data. Though the existing system of measuring particles was very accurate, the method of delivering particle data and alarms needed improvement. Through the interaction of several teams, a scenario of data delivery and alarms, using the company's existing E-mail system, was conceived. A single mission team began the task of program generation and implementation. The job began where the output of the particle measurement system left off: at the ASCII output of the FMS-300 particle measurement equipment. The data was first filtered through a “C” program that created discrete files and then through a COBOL program that set alarm conditions. The information was ultimately delivered to the affected production areas via E-mail, which doubled as a return line for corrective actions. By using the new system, Harris moved the tachometer back in front of the driver and, in the process, saved a lot of paper and review time and eliminated many false alarms.

A Change was Needed

Harris Semiconductor has used the FMS-300 particle measurement system to monitor cleanrooms at the Palm Bay site for several years. The system allows sampling of multiple facilities within each of five different wafer and mask fabs. The Class 10 facility is monitored for 0.5 and 1.0 micron particle sizes from 36 remote monitors that are strategically placed in the fab. A dedicated computer reads each of the monitors every 15 minutes, producing 24,192 readings per week for the Class 10 fab. In the past, each of the 36 sites produced an individual color paper graph (see Figure 1) with both particle sizes illustrated. Once the graphs were produced, it was the fab supervisor's responsibility to gather them weekly. They were then reviewed and their individual averages compared against a standard. Area particle counts that failed to meet the standard required a review by the supervisor with the people in the affected zone. The review often turned into a memory test since the data was now many hours or days old. The slow data delivery time created a less-than-ideal environment for cleanroom improvements.

In addition to the delays caused by the distribution of paper graphs, another issue persisted. The presence of numerous short-duration events, or spikes, tended to wash out more meaningful trend data. When the graphs were reviewed, it was difficult to separate short-duration spikes from normal readings. The company concluded that the spikes were mostly transient, nonrepetitive events caused by activities such as an operator passing too close to a monitor. It was generally thought that these spikes should be removed from the review unless the readings remained high for some defined length of time. The longer-duration events seemed to have a much greater chance of being identified and corrected. Trying to react to numerous spikes numbed the fab people by generating too many alarms. Therefore, Harris decided to create a filter for the spikes that would allow trends to be monitored while ignoring single-event phenomena. The alarms would also be set to trip under the specification limit, providing an early warning system rather than a “post mortem bugle.”

Several opportunities for improvement were identified: establishment of “noise filters” to screen out particle spikes; rapid distribution of particle data to the people who could effect a corrective change; and setting E-mail alarms for correctable events only.

A cross-functional cleanroom team known as the “Dustbusters” had to decide where to set alarm conditions. Their charter was reviewing cleanroom practices and implementing improvements. They defined the conditions that constitute a value-added alarm. The team concluded that an alarm should be set only if a particular site measured greater than nine 0.5-micron particles for three consecutive samples. This would reduce the unwanted “noise” output and cause an alarm only as a result of a persistent condition. Setting the alarm below the out-of-spec limit of 10 (0.5 micron) particles would provide an early warning system. The team further decided that they would revisit the alarm methodology should feedback indicate that it was too relaxed. (See Figure 2.)
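The rule lends itself to a few lines of code. The sketch below, written in C for consistency with the filter program described later, is illustrative only; the structure, constants and site numbering are assumptions drawn from the criteria above, not Harris' actual implementation.

/* Illustrative sketch of the "Dustbuster" alarm rule: trip only when a
 * site reads more than 9 particles (0.5 micron) for 3 consecutive samples.
 * Record layout and constants are assumptions, not Harris' code.         */
#include <stdio.h>

#define ALARM_THRESHOLD  9   /* counts above this value are suspect       */
#define CONSECUTIVE_HITS 3   /* samples in a row required to raise alarm  */
#define NUM_SITES        36  /* remote monitors in the Class 10 fab       */

struct reading {
    int site;            /* remote monitor ID (1 through NUM_SITES)       */
    int count_half_um;   /* 0.5-micron particle count                     */
};

/* Returns 1 when the latest reading completes a run of three consecutive
 * over-threshold samples for its site; streak[] holds the per-site runs. */
static int alarm_check(const struct reading *r, int streak[])
{
    if (r->count_half_um > ALARM_THRESHOLD)
        streak[r->site]++;
    else
        streak[r->site] = 0;   /* one clean sample resets the run         */

    return streak[r->site] >= CONSECUTIVE_HITS;
}

int main(void)
{
    int streak[NUM_SITES + 1] = {0};
    struct reading samples[] = { {12, 11}, {12, 14}, {12, 10}, {12, 4} };
    int i;

    for (i = 0; i < 4; i++)
        if (alarm_check(&samples[i], streak))
            printf("ALARM: site %d exceeded %d particles for %d samples\n",
                   samples[i].site, ALARM_THRESHOLD, CONSECUTIVE_HITS);
    return 0;
}

Run against the sample data above, the third consecutive over-threshold reading for site 12 raises the alarm, and the following clean reading resets the run, which is exactly the "persistent condition" behavior the team wanted.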

Once the program finds a record that exceeds the alarm criteria, it determines which area is affected and to whom an E-mail message will be sent. The E-mail message is then forwarded to the affected area personnel. Receipt of the E-mail message also causes a beep at the fab terminals. The beep is accompanied by a message header telling the technicians that an alarm condition exists. The message details the date, time, affected area, particle reading and a short description of the affected location. The technicians are then required to take the specified actions to isolate the problem and implement a cure. The messages are also sent, in parallel, to supervisors and quality auditors for later review and follow-up. Additionally, the data is archived as a permanent record, and paper graphs may be generated “on demand” for big-picture analysis. This change alone saves about 3,000 sheets of paper per month, or about $300 a year, for all the Palm Bay sites.
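The article does not say exactly how the alarm text is handed to the mail system on the VAX. One plausible approach under VMS, shown below purely as an assumption, is to spawn a DCL MAIL command against a distribution list; the file name, list name and subject format here are invented for illustration.

/* Hypothetical sketch only: one way a VMS program could hand an alarm
 * message to the mail system is to spawn the DCL MAIL command against a
 * distribution list. Error handling and buffer checks are omitted.       */
#include <stdio.h>
#include <string.h>
#include <descrip.h>
#include <lib$routines.h>

static void send_alarm(const char *msg_file, const char *dist_list,
                       const char *subject)
{
    char cmd[256];
    struct dsc$descriptor_s cmd_dsc;

    /* e.g.  MAIL/SUBJECT="Particle alarm ..." ALARM.TXT @FAB1_TECHS      */
    sprintf(cmd, "MAIL/SUBJECT=\"%s\" %s @%s", subject, msg_file, dist_list);

    cmd_dsc.dsc$w_length  = (unsigned short)strlen(cmd);
    cmd_dsc.dsc$b_dtype   = DSC$K_DTYPE_T;
    cmd_dsc.dsc$b_class   = DSC$K_CLASS_S;
    cmd_dsc.dsc$a_pointer = cmd;

    /* Spawn a subprocess to run the MAIL command and deliver the file.   */
    lib$spawn(&cmd_dsc, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
}

Because the SMTP gateway sits behind the mail system, the same call would reach users on any of the plant's hardware configurations.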

Integrating the FMS-300 and the mainframes allowed the company to reduce notification times from an average of 72 hours to nine minutes. The number of alarms has been reduced by 90 percent. As a result of this and other significant cleaning actions taken by the fab, the weekly grand average (the average of all the averages) of the Class 10 facility has dropped to 1/50th of its former value.

Curtis Phillips, lead author, is a staff quality engineer who has worked at Harris Semiconductor for 15 years. He has been involved in the areas of statistical process controls, life testing, Class S (space) environmental testing and, most recently, as co-chairman of a Class 10 wafer fab particle team. He received his B.S./B.A. from Rollins College and has published articles on electrostatic discharge.

Richard C. Soper is the key user of the WorkStream system, and has shared responsibility for the building management systems in the Facilities Operations Group at Harris Semiconductor in Palm Bay. He has a B.S. in chemical engineering and has worked for Harris Semiconductor for five years. He has published a paper on database design using WorkStream.

Joseph D. Jenne has been employed at Harris Semiconductor since 1983. He has held positions in both planning and information systems. His current position is senior systems analyst responsible for software development and system administration. He has an A.A. in computer science and is pursuing his B.S. at Florida Institute of Technology.

Figure 1. In the past, each of the 36 remote monitors at Harris Semiconductor's Palm Bay, FL facility produced individual color paper graphs which had to be gathered and analyzed weekly by the fab supervisor.

Harris Semiconductor's Palm Bay site improved the delivery of particle measurement data via its E-mail system. Harris reduced the number of alarms in its fab by 90 percent and improved its notification time from up to 72 hours to nine minutes.

Figure 2. When should an alarm be sent through the E-mail system to alert an operator that particles are contaminating the cleanroom? Harris developed this flow chart to provide a basis for sending alarm messages.

Figure 3. Block diagram of the FMS-300 reporting system. The “Dustbuster” team set alarm conditions at greater than nine 0.5-micron particles in three consecutive samples.

Processing Particle Data: The Challenges

Because the FMS-300's PC was a standalone system, the first task was to determine the most effective way to communicate an alarm from the system to the manufacturing community. Since several hardware configurations existed throughout the plant, it was obvious that the Simple Mail Transfer Protocol (SMTP) gateway would be required to allow communication from system to system. This allowed the company to focus on processing the particle data on one platform and utilize the mail gateway to deliver the actual alarm messages to the users. While the software programs were being conceptualized, the telecommunications department was connecting the particle system to a LAT port on the VAX cluster. The VAX was chosen as the main processing system because of its extensive computing power and the large number of manufacturing personnel already connected to it.

Once the connections were established, the next challenge was to capture the data on the LAT port as it came across the Ethernet connection. A VAX C program was developed to continuously monitor this port for particle data. To accomplish this, the program used VMS system services to perform I/O operations between the LAT port and a sequential output file. The file would then hold the particle data for additional processing. In this particular application, the SYS$QIOW (Queue I/O Request and Wait) system service was used to provide synchronous I/O operations between the two devices. Required arguments to the $QIO service include the channel number assigned to the device, which must be established prior to the I/O request, and a function code. The function code indicates the specific operation being requested of the service. The IO$_READVBLK (Read Virtual Block) function was used to read the data into the program's buffers. Once the $QIO system service request completes successfully, the data is written to the output file. To avoid locking contention between the VAX C program and the program that would actually process the records through a specific rule set, Harris decided to release the output file after 24 records had been stored. A new output file would then be opened by the program to store the next 24 records.
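A skeleton of that read loop might look like the following. This is a sketch rather than the production program: the LAT device name, buffer size, file-naming scheme and error handling are assumptions, but the calls (SYS$ASSIGN to establish the channel, SYS$QIOW with the IO$_READVBLK function) follow the description above.

/* Sketch of the LAT-port reader described above (not Harris' actual code).
 * The device name, buffer size and file names are illustrative only.      */
#include <stdio.h>
#include <descrip.h>
#include <iodef.h>      /* IO$_READVBLK function code                      */
#include <ssdef.h>      /* SS$_NORMAL status                               */
#include <starlet.h>    /* sys$assign, sys$qiow prototypes                 */

#define RECS_PER_FILE 24

int main(void)
{
    $DESCRIPTOR(lat_port, "LTA101:");   /* hypothetical LAT device name    */
    unsigned short chan;
    unsigned short iosb[4];             /* I/O status block                */
    char buf[512];
    char fname[64];
    unsigned int status;
    int recs = 0, file_no = 0;
    FILE *out = NULL;

    /* Assign a channel to the LAT port before any I/O can be queued.      */
    status = sys$assign(&lat_port, &chan, 0, 0);
    if (status != SS$_NORMAL) return status;

    for (;;) {
        /* Synchronous read: wait until one record arrives on the port.    */
        status = sys$qiow(0, chan, IO$_READVBLK, iosb, 0, 0,
                          buf, sizeof(buf), 0, 0, 0, 0);
        if (status != SS$_NORMAL) break;

        if (out == NULL) {              /* start a new 24-record file      */
            sprintf(fname, "PARTICLE_%03d.DAT", ++file_no);
            out = fopen(fname, "w");
        }
        fwrite(buf, 1, iosb[1], out);   /* iosb[1] holds bytes transferred */
        fputc('\n', out);

        if (++recs == RECS_PER_FILE) {  /* release the file for the COBOL  */
            fclose(out);                /* program, avoiding lock conflicts */
            out = NULL;
            recs = 0;
        }
    }
    return status;
}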

The second program, written in VAX COBOL, processes the released files. Each record contains a port ID number that identifies the data. As each record is processed, a statistics summary file is read to obtain the last alarm processed, the date and time of that alarm, and other specific information used to determine what actions need to be taken.
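The record layouts are not given in the article; the structures below, expressed in C for consistency with the earlier sketches, are a guess at the information the text says each program exchanges. All field names are hypothetical.

/* Hypothetical layouts for the data the two programs exchange; the field
 * names are invented, but the content follows the description above.      */
struct particle_record {
    int  port_id;             /* identifies which remote monitor produced it */
    char timestamp[20];       /* date and time of the sample                 */
    int  count_half_um;       /* 0.5-micron particle count                   */
    int  count_one_um;        /* 1.0-micron particle count                   */
};

struct site_summary {         /* one entry per port in the statistics file   */
    int  port_id;
    int  consecutive_highs;   /* run length used by the alarm rule           */
    char last_alarm_time[20]; /* when this site last tripped an alarm        */
};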

Glossary of Terms

ASCII: American Standard Code for Information Interchange–A standard code used for information interchange among data processing systems, communication systems and associated equipment.

Synchronous–Involving a sequence of operations in which each operation begins only after the previous one completes.

Buffer–An allocation of memory used to temporarily store information.

C–A general-purpose structured programming language developed at AT&T Bell Laboratories.

COBOL: Common Business Oriented Language–A widely used business data processing programming language.

CPU–The central processing unit of a computer system that controls the interpretation and execution of instructions.

E-mail–The transmission, storage and distribution of text material in electronic form over communication networks.

FMS-300–Manufactured by Particle Measurement Systems Inc. (Boulder, CO).

I/O–Indicates an input or output operation.

LAT–A Local Area Transport network protocol.

Port–Electronic circuitry that provides a connection point between the CPU and input/output devices.

System Services–Operating system routines that a program can call directly to request functions such as I/O.

VAX: Virtual Address Extension–Digital Equipment Corp. computer architecture.

VMS: Virtual Memory System–The VAX operating system.
