Factory integration

09/01/2003

Solid State Technology asked industry automation professionals to discuss what is needed to resolve equipment-to-factory integration issues.

Let's consider API

Michael Feaster, VP of software engineering at Cimetrix Inc., Salt Lake City, Utah



Next-generation equipment, as well as new lab-to-fab equipment hitting fab production lines for the first time, poses a difficult challenge for integrators of the new 300mm standards. It is not like the old days, when you could simply understand SECS-I or HSMS, SECS-II, and GEM; add a few state models for flow control; hammer out some general data variables and alarms; and then throw the interface over the wall to the fab's host integration team to make sense of it.

In the 300mm world, it is a given that you have done the above, and that your SECS-II and GEM package works as your GEM manual dictates and responds appropriately to requests from the host. There was a time when this was a major part of equipment-to-factory integration. Now it is a prerequisite.

The challenge lies in the release of the 300mm SEMI standards E39, E87, E90, E40, and E94, and the newer standards E120, E121, E116, E109, PR8-0303, 3510, 3507, 3569, 3571, and 3509. When you read the new standards, they can seem like islands unto themselves. Pulling all the released standards together and applying them to a tool is difficult without someone experienced leading you through.

The standards bodies' answer is to have more people attend the standards meetings and to get engineers from each tool supplier to become experts. However, there are too many standards to keep up with, not to mention the implied costs and the amount of time required; at least 50% of an engineer's time would be needed to stay current. The problem is made worse by the economic downturn. Tool suppliers are laying off software engineers, hoping to keep their domain engineers to maintain core competencies. That is a fine strategy except that, generally, it is the software engineers who understand the communication elements of their tool and how those elements map to the standards. This forces tool suppliers to delay investment in 300mm standards until they have an order, which delays tool shipments and, in turn, fab deployments.

Another problem is that the standards leave too much room for interpretation. One IC maker does not interpret them as another IC maker does. The 300mm standards are certainly much better than GEM, but there is still too much interpretation required from fab integrators and tool suppliers.

The problem with the new, unapproved semiconductor standards is that current network-based protocols rely on heavier clients, which require as much as 2,000 bytes/message as opposed to 64 bytes with SECS-II. This will require larger or multiple CPUs, more memory, more upfront investment in network infrastructure, and, perhaps, special proprietary compression routines.

If a required tool-side application programming interface (API) and host-side API accompanied each standard, tool suppliers would no longer need to track the background behavior and changing technology behind the API (for example, creation and destruction of E39 objects, E39-to-SECS conversion or conversion to any other protocol, state transitions, etc.). If the standards bodies focused on a clean abstraction between the tool-host API and the bits and bytes behind it, then the machinery that moves those bits and bytes could be reworked, or even replaced, without the API abstraction changing. No one cares how the data get from point A to point B, but everyone cares about how his or her program interfaces to it.
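The separation argued for here can be sketched in a few lines. The following is a minimal, hypothetical illustration — none of these class or method names come from any SEMI standard — showing a stable host-side API with interchangeable wire protocols behind it:

```python
from abc import ABC, abstractmethod

class EquipmentTransport(ABC):
    """The wire protocol behind the API; hypothetical, not from any SEMI standard."""
    @abstractmethod
    def get_attribute(self, obj_id: str, attr: str) -> str: ...

class SecsTransport(EquipmentTransport):
    def get_attribute(self, obj_id: str, attr: str) -> str:
        # A real implementation would encode an E39 attribute request as SECS-II.
        return f"SECS:{obj_id}.{attr}"

class NetworkTransport(EquipmentTransport):
    def get_attribute(self, obj_id: str, attr: str) -> str:
        # A real implementation would issue a heavier network-based request.
        return f"NET:{obj_id}.{attr}"

class HostApi:
    """Stable host-side API: application code never sees the wire protocol."""
    def __init__(self, transport: EquipmentTransport):
        self._transport = transport

    def read(self, obj_id: str, attr: str) -> str:
        return self._transport.get_attribute(obj_id, attr)
```

Host application code written against `HostApi` keeps working when `SecsTransport` is swapped for `NetworkTransport` — the bits and bytes change, the abstraction does not.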

As with many complex standards, working, adoptable source code could be supplied with each standard, including SECS/GEM. This would serve as an implementation guide for third-party vendors or a starting base for companies adopting the standard.

An open source project could be created that applies standards in a fully working model supplied to members, further promoting understanding and acceptance. One only need look at the World Wide Web Consortium to see the fruits of such an effort.

The standards body could require third-party software vendors to adhere to the API and could perform compliance testing; Microsoft offers a model of how to qualify vendors. The standards implementation could even be put in hardware, as an integrated chip combined with a network interface card and hardware compression to supply the required bandwidth. This would eliminate the need for blue boxes and for CPU and memory increases.

Aside from the pros and cons, any of these options would require less standards knowledge from tool suppliers and fab host integrators, increasing the likelihood of a tool working in the fab immediately, thereby improving deployment time.


The heir apparent to SECS

Mark Pendleton, principal member of the technical staff at Asyst Technologies Inc., Fremont, California

While SECS/GEM standards have served the industry well overall, they will not meet future equipment integration demands for two reasons: the need to serve multiple clients simultaneously and the need for wafer-level control.



More specifically, SECS is not a discoverable interface. The factory automation system cannot query the SECS interface to determine its capabilities. No matter how sophisticated the factory automation system may be, the SECS standard requires a large manual integration effort for each tool.

The SECS cable is a point-to-point link; there is exactly one software process (i.e., one point) on the factory side of the cable. Moreover, SECS does not reveal the structure of the tool: the tool's composition cannot be inferred by looking at SECS messages. This prevents truly generic factory systems for important tasks such as remote status display and diagnostics.

SECS also has no security mechanism. There is no concept of client authentication or access permissions in SECS. If we expand the potential for connection points, then we must have a security mechanism as well.
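To make the contrast concrete, a discoverable, authenticated, multi-client interface might look roughly like this sketch. All names here are invented for illustration; nothing below is drawn from SECS or any SEMI standard:

```python
class EquipmentServer:
    """Hypothetical discoverable, authenticated equipment interface."""
    def __init__(self):
        # Capabilities a client can query at run time -- SECS has no equivalent.
        self._capabilities = {
            "LoadPort1": ["State", "CarrierID"],
            "ChamberA": ["Temperature", "Pressure"],
        }
        # Per-client tokens -- SECS has no concept of client authentication.
        self._tokens = {"fdc-client": "read", "mes-host": "control"}

    def discover(self):
        """Return the interface's structure, removing manual integration work."""
        return {name: list(attrs) for name, attrs in self._capabilities.items()}

    def read(self, token, component, attribute):
        """Authenticated read; any number of clients may hold tokens."""
        if token not in self._tokens:
            raise PermissionError("unknown client")
        if attribute not in self._capabilities.get(component, []):
            raise KeyError(attribute)
        return 0.0  # placeholder for a real sensor value
```

Because access is brokered per token rather than per cable, an FDC client and an MES host can read concurrently — the multi-client requirement discussed above.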

e-Diagnostics, APC, fault detection and classification (FDC), and other emerging applications will require the ability to establish concurrent multiclient access to equipment data independent of its online-offline state, and independent of the current ownership of equipment processing control.

Semi has worked with the industry to develop an approach that is focused on the tool and its components as entities, thereby providing a more accurate view of relationships between tools and data than is available with SECS/GEM. This work has resulted in definition of a family of object-based standards that are directly aligned with meeting emerging requirements for data access being defined by the e-factory.

Object-based standards are being developed to specifically provide methods to describe the composition and behavior of complex equipment in a generic way, and to support the needs of production and advanced technologies such as APC and predictive maintenance.

The goal of the relatively new object-based standards effort within Semi is to develop a replacement for GEM that is protocol-independent and that is based on the OSS object model paradigm. It will allow a host system to reference components of equipment, including current status (attributes), and elicit and observe behavior in the equipment through service calls. Interfaces to the host system and low-level sensor bus systems are defined. It will be necessary to develop an alternative solution to the current communication standards to meet the objectives of e-Diagnostics.
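The object-based style described above — a host referencing components of equipment and their current status — can be sketched as a simple component tree. The names and structure here are invented for illustration and are not taken from E120 or any other SEMI standard:

```python
class Component:
    """One node in a hypothetical common-equipment-model tree."""
    def __init__(self, name, attributes=None):
        self.name = name
        self.attributes = attributes or {}  # current status of this component
        self.children = {}

    def add(self, child):
        """Attach a subcomponent and return it, to allow chained construction."""
        self.children[child.name] = child
        return child

    def find(self, path):
        """Let a host reference a component by path, e.g. 'Robot/Arm1'."""
        node = self
        for part in path.split("/"):
            node = node.children[part]
        return node

# Building a tool description that a host system can traverse:
tool = Component("Tool", {"State": "Idle"})
robot = tool.add(Component("Robot"))
robot.add(Component("Arm1", {"Position": "Home"}))
```

A generic host can walk such a tree without tool-specific knowledge — the opposite of the SECS situation, where the tool's structure is invisible in the message stream.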

With the upcoming adoption of object-based technology standards by Semi, the semiconductor industry eventually will realize tighter process control, reduced labor overhead, and improved diagnostics and maintenance of critical assets.

Those who adopt these object-based standards (i.e., E120: The Common Equipment Model) as the primary enabler between the tool and the factory will have a significant advantage over those who don't. With the help of industry consortia, it is possible to put these standards on the fast track and quickly make the benefits of object-based equipment connectivity solutions available to the semiconductor industry at large.

The time to establish these object-based standards as the heir apparent to SECS/GEM has come. The industry has recognized this need in objective-, capability-, and requirements-oriented terms. Given that the SECS/GEM standards are deeply rooted, basic resistance to change must be overcome. The tide is starting to turn as IC makers begin to require these capabilities in new equipment purchase specifications with a focus on improving current 200mm fab efficiency. It is not a matter of "if," but "when."


Data quality is key

James Moyne, director of APC technology at Brooks Automation Inc., Chelmsford, Massachusetts

The move toward 300mm and total automation has heightened the importance of equipment-to-factory integration. One of the fundamental sticking points is the correctness of the information on the SECS/GEM link. We have matured to the point where we can reliably get SECS/GEM links operating; however, data communication over these links leaves a lot to be desired.



For example, critical data are often not available, not provided with sufficient context to be understood, or not reported in a timely fashion. Events are often not reported or are reported out of order. These types of problems are often addressed through preemptive testing of the SECS/GEM interface.

Preemptive testing of the SECS/GEM link should be considered a requirement. However, even with successful testing, there is another class of important problems in equipment integration. These focus on data quality necessary to support data-intensive activities such as APC. Increasingly, components of APC, such as run-to-run (R2R) control and FDC, are becoming an essential part of equipment integration. APC is often required to provide the process capability, throughput, and yield necessary to remain competitive.

However, APC is only as good as the data it receives, and the majority of those data come from the equipment interface. Common problems with the equipment interface with respect to APC include: FDC data for monitoring equipment health cannot be reported fast enough, or collecting them slows the equipment automation interface; the tool does not provide for actuation of individual process program parameters via the SECS interface to support R2R control; and the reported data are of sufficiently poor quality (lacking context, freshness, accuracy, or an accuracy indication) that APC algorithms cannot be implemented effectively.

The community has realized that the SECS/GEM interface is not well suited to APC data reporting and has begun specifying a second, "equipment data acquisition" tool interface that will eventually support APC data collection. The emergence of this second interface will go a long way toward addressing the data quality issue; however, APC systems that use these data must remain robust enough to discriminate against bad data. For example, R2R control systems should be able to handle missing, out-of-order, or erroneous metrology data. FDC systems must have a configurable data collection system so that collection can be matched to the processing power of the tool. Finally, data quality test solutions are needed to identify equipment data problems and limitations.
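The kind of robustness called for above can be illustrated with a minimal exponentially weighted moving average (EWMA) run-to-run controller — a common R2R scheme — that simply refuses to fold in missing or out-of-order metrology. The class, parameter values, and gain model are assumptions for the sketch, not a production design:

```python
class EwmaR2RController:
    """Minimal EWMA run-to-run controller sketch; parameters are assumed."""
    def __init__(self, target, gain=1.0, lam=0.5):
        self.target = target    # desired process output
        self.gain = gain        # assumed-known process gain
        self.lam = lam          # EWMA weight on new measurements
        self.offset = 0.0       # current disturbance estimate
        self.last_run = -1      # last run id seen, for ordering checks

    def update(self, run_id, measurement):
        """Fold in one metrology result, rejecting missing/out-of-order data."""
        if measurement is None or run_id <= self.last_run:
            return self.recipe()  # bad data: keep the previous recipe
        self.last_run = run_id
        error = measurement - self.target
        self.offset = self.lam * error + (1 - self.lam) * self.offset
        return self.recipe()

    def recipe(self):
        # Recipe setting that compensates for the estimated disturbance.
        return (self.target - self.offset) / self.gain
```

The guard clause in `update` is the point: a late-arriving or empty metrology report leaves the recipe unchanged instead of corrupting the disturbance estimate.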

Equipment-to-factory integration is a protocol and a data issue. We have maturing testers to verify the protocol capability. We need to enhance these test capabilities and feed the lessons learned and best practices back into enhancement of integration standards. With respect to the data issue, we must also provide test capabilities to verify data quality of equipment interfaces. Test solutions for data quality should emerge over the next year. However, we must also make sure that our APC systems are robust enough to deal with data quality issues in an automated fashion.