Accurate machine vision in challenging conditions
Finding patterns and reporting their location quickly and accurately is essential in today's high-precision electronics and semiconductor manufacturing environments.
By Gary Wagner
Machine vision has evolved to become a mainstream automation tool, enabling computers to replace human vision in high-speed and high-precision manufacturing applications. Today, machine vision is being used to automate processing and ensure quality in manufacturing everything from diapers to the most advanced computer chips.
Two industries that have pushed the envelope of machine vision technology are semiconductors and electronics. In these segments, machine vision tools are used to precisely guide a variety of robotic handling, assembly and inspection processes. The most significant challenge for machine vision in these automated applications is maintaining the ability to locate reference patterns despite changes in material appearance. Normal process variations can produce a number of unpredictable conditions, including contrast reversal and intensity gradients, angular uncertainties, blur caused by changes in depth of field, scale changes, and partial obliteration or missing features.
The traditional approach to machine vision has been normalized grayscale correlation – a pattern-matching technique that compares the shading between an image being inspected and an image on which the vision system is trained. For many years, correlation was the algorithm of choice for most machine vision system applications. It was reliable, easy to implement and relatively undemanding for earlier generations of computing technology.
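To make the idea concrete, the sketch below shows the bare bones of normalized grayscale correlation using OpenCV's general-purpose template matcher. It is an illustration of the technique only, not any particular vendor's tool, and the file names are placeholders.

```python
# Minimal sketch of normalized grayscale correlation using OpenCV's
# template matcher; an illustration of the general technique, not any
# vendor's tool. File names are placeholders.
import cv2

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)      # image being inspected
pattern = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE)  # trained pattern

# Normalized correlation coefficient: 1.0 is a perfect match,
# values near 0 indicate little similarity to the trained pattern.
result = cv2.matchTemplate(scene, pattern, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(result)

print(f"best score {best_score:.2f} at x={best_loc[0]}, y={best_loc[1]}")
```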
Process Variations
Unfortunately, most grayscale correlation tools have difficulty coping with changes in appearance after being trained on a particular pattern. While traditional correlation tools are adequate for locating patterns under ideal conditions, they exhibit low tolerance to image changes in scale, angle, blur, obliteration and contrast variation.
With the demands of advanced manufacturing processes, there are new and challenging conditions that correlation cannot overcome. For example, normalized grayscale correlation is easily confused by nonlinear contrast changes, where the grayscale value changes unpredictably on part of an image. Imagine a gray bar where the outline stays the same, but the inside turns from dark to light. Because a correlation system attempts to match a pattern learned purely from grayscale values, it has a built-in problem with this type of intensity gradient or nonlinear contrast change.
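A small synthetic example, intended purely as an illustration, shows the effect: the bar's outline is unchanged, but flipping its interior from dark to light collapses the zero-mean normalized correlation score.

```python
# Illustrative only: a synthetic gray bar whose outline stays the same
# while its interior turns from dark to light, and the effect on a
# zero-mean normalized correlation (Pearson) score.
import numpy as np

template = np.full((40, 80), 200.0)       # light background
template[10:30, 20:60] = 50.0             # dark bar

changed = template.copy()
changed[12:28, 22:58] = 230.0             # interior turns light; outline stays dark

score = np.corrcoef(template.ravel(), changed.ravel())[0, 1]
print(f"correlation score: {score:.2f}")  # far below 1.0 despite identical geometry
```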
With normalized grayscale correlation, the traditional approach has been to work around, or simply avoid, manufacturing processes that create optical problems the vision system cannot cope with. For example, if the inspection process produced specular reflection – bright patches where light reflects back toward the viewer – users would typically have worked around the problem by changing the lighting, adding special filters or repositioning the machine in relation to the camera. In today's environment, these steps could place constraints on the manufacturing process that would very likely be unacceptable. In the semiconductor and electronics industries, there is often no feasible workaround when optical conditions are less than ideal.
Training on the Edges
Another way to approach these vision system challenges is with geometric pattern matching. Rather than evaluating grayscale patterns within an image, this technique trains on the edges. It then fits the edges to a geometric model, which it uses to detect corresponding features – and the target object – in an incoming image.
Vision challenges, such as contrast reversal and intensity gradient, that would be problematic for a correlation-based system are no longer an issue, because geometric pattern matching is not keyed to grayscale values, only to the position of edges. The system would not train on a gray bar, for example, but on the fact that the object is a bar in the first place. It would then search for the presence of the bar in new images. Even if the bar is a different shade, the geometry remains the same. It may no longer be gray, which would result in poor correlation, but the structure (the pattern and edges) is still there.
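As a rough stand-in for this approach, the sketch below extracts edges with OpenCV, keeps the dominant contour and compares shapes using Hu-moment matching, which ignores absolute gray levels. Commercial geometric pattern matchers fit far richer models than this; the point is only that the comparison keys on edge geometry rather than shading. File names are placeholders.

```python
# Loose stand-in for edge-based matching: extract edges, keep the dominant
# contour and compare shapes with Hu-moment matching, which ignores
# absolute gray levels. File names are placeholders.
import cv2

def dominant_contour(img):
    edges = cv2.Canny(img, 50, 150)        # edge positions, not shading
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

trained = cv2.imread("trained_pattern.png", cv2.IMREAD_GRAYSCALE)
incoming = cv2.imread("incoming_image.png", cv2.IMREAD_GRAYSCALE)

# Lower is better; a contrast-reversed bar with the same outline
# still produces a small shape distance.
distance = cv2.matchShapes(dominant_contour(trained),
                           dominant_contour(incoming),
                           cv2.CONTOURS_MATCH_I1, 0.0)
print(f"shape distance: {distance:.4f}")
```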
Geometric edge detection is actually not new – MIT developed the algorithm in the 1950s. However, the technology saw little commercial use because the computing power it required was too expensive and complex to deploy on a manufacturing floor. The technique remained relatively dormant for several decades, even as the limitations of correlation became increasingly apparent for emerging challenges in semiconductor and electronics applications. Today, with the greater power of off-the-shelf processors, geometric pattern matching has become both technically and economically feasible (Figure 1).
What's in a Score?
Both grayscale correlation and geometric pattern matching produce a score that ranks the closeness of the match. However, some scores give more useful measurements than others. Correlation provides simply a “best guess” that does not necessarily indicate a valid match.
To illustrate, suppose the correlation system is trained on a black square and examines a white wall with a black square in the middle. Correlation will be poor across the white wall and good in the middle, where the system returns a peak value at the location where the black square correlates best with the new scene. But what if the object were a black circle? It would still correlate far better with the black square than the white wall does. The score won't be perfect, but it will be relatively high. What does that mean? Unfortunately, one simply can't know. Perhaps the score fell short because of some optical anomaly, or perhaps the object isn't the one you were looking for.
There's a lot of guesswork in scores based on correlation. One could even train on a map on the wall, then examine a window shade and possibly get a score of 30. The conventional wisdom is to settle on a specific value – such as 70 or better – and accept scores that are higher and reject those that are lower. The problem is that even with passing scores, there's no sure way to know if a match has been made, or if the “match” is just something that resembles the desired object.
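In code, that conventional rule amounts to nothing more than a fixed cutoff on the peak correlation value; the 0.7 threshold below simply mirrors the "70 or better" convention and is illustrative.

```python
# The conventional accept/reject rule is just a fixed cutoff on the peak
# correlation score; 0.7 mirrors the "70 or better" convention.
ACCEPT_THRESHOLD = 0.7

def accept_match(peak_score: float) -> bool:
    # A passing score only says the scene resembles the trained pattern;
    # it does not prove the trained object is actually present.
    return peak_score >= ACCEPT_THRESHOLD
```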
Returning Meaningful Results
Geometric edge detection would approach the problem differently, by measuring how well the edges of the square that the system was trained on match up to the edges of the circle. It can also return a number of sub-scores that provide specific details about the image and its contents. These include X-Y location, rotation from the trained reference image, contrast variance, geometric score (how well the edges matched up) and percentage of conformance (how many of the edges were found). By analyzing the structure, the vision system can provide much more concrete information about whether an object is present than the best-guess attempts provided by grayscale correlation.
For example, suppose the vision system is trained to detect a cross, and it examines a new image in which one of the arms has been clipped off (Figure 2). The system may return an overall score of 75 to indicate that most of the object was detected. However, other measurements might be more significant depending on the application – for example, the system could report that three edges fit perfectly but one edge is missing. It's up to the user to choose which scores are of interest: the user may only care whether the overall score is above or below a specified value, or may be interested in the more specific detail where it is helpful.
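The sub-scores described above might be carried in a small result record like the hypothetical one below. The field names and thresholds are assumptions made for illustration, not any particular product's interface.

```python
# Hypothetical result record for a geometric match; the field names and
# the example decision rule are assumptions, not a vendor API.
from dataclasses import dataclass

@dataclass
class GeometricMatch:
    x: float                  # X location in the inspected image
    y: float                  # Y location in the inspected image
    rotation_deg: float       # rotation relative to the trained reference
    contrast_variance: float  # how much the shading differs from training
    geometric_score: float    # 0-100: how well the edges matched up
    conformance_pct: float    # 0-100: how many of the trained edges were found

def cross_with_missing_arm_ok(m: GeometricMatch) -> bool:
    # Example of choosing the scores that matter for the application:
    # accept a part whose found edges line up well even if some edges
    # are absent, such as the cross in Figure 2 with one arm clipped off.
    return m.geometric_score >= 90 and m.conformance_pct >= 70
```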
Real-world Examples
Geometric edge matching is well-suited to a wide variety of operations in the semiconductor industry, where wafers or die must be aligned precisely so that activities such as lithography, cutting, and placing or bonding can be performed to extremely tight tolerances (Figure 3). One example is pick-and-place, where a robot picks up parts from a tray or waffle pack. The robot moves to where the parts are supposed to be, snaps a picture, then calculates the location with great precision to pick up the component and position it on a printed circuit assembly. In a more challenging scenario, components could be scattered across a shaker table with their orientation and location unknown. Edge detection tools can locate the parts and report their orientation so the robot can reorient them as needed, as in the sketch below. In contrast, correlation would be stymied by the rotated patterns.
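A minimal sketch of one such edge-based approach, assuming a simple binary threshold is enough to separate parts from the table: extract each part's contour and read its center and angle from the minimum-area bounding rectangle. The area cutoff and file name are placeholders.

```python
# Minimal sketch: recover position and orientation of scattered parts from
# their contours. The threshold, area cutoff and file name are placeholders.
import cv2

img = cv2.imread("shaker_table.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

for contour in contours:
    if cv2.contourArea(contour) < 500:     # ignore specks and debris
        continue
    (cx, cy), (w, h), angle = cv2.minAreaRect(contour)
    print(f"part at ({cx:.1f}, {cy:.1f}), rotated {angle:.1f} degrees")
```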
Figure 4. Actual display from a machine vision system console showing the different tools used to solve the application.
Another real-world example can be seen in chemical/mechanical polishing, a chip fabrication process that involves polishing a wafer to remove surface debris. As the top of the wafer is smoothed away, features become smaller and closer together, like a canyon closing in toward the bottom. Scale differences introduced by this process can create problems for correlation, which is trying to match grayscale values. However, a vision system that uses edge detection can still identify the structure and edges based on the geometric model on which it was trained (Figure 4).
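As a simple illustration of that tolerance, the sketch below compares a feature's outline against a uniformly shrunken copy of itself using Hu-moment shape matching, which is insensitive to scale; a grayscale template of the original size would correlate poorly against the smaller feature. The file name and scale factor are placeholders.

```python
# Illustrative only: Hu-moment shape matching is insensitive to a uniform
# scale change of the kind CMP can introduce. File name is a placeholder.
import cv2

def outline(img):
    edges = cv2.Canny(img, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

original = cv2.imread("wafer_feature.png", cv2.IMREAD_GRAYSCALE)
shrunk = cv2.resize(original, None, fx=0.8, fy=0.8)   # feature appears smaller

# Shape distance stays small despite the size change.
print(cv2.matchShapes(outline(original), outline(shrunk),
                      cv2.CONTOURS_MATCH_I1, 0.0))
```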
Adapting to the Process
The emergence of edge detection vision tools does not mean the imminent demise of normalized grayscale correlation. There will be situations in which grayscale correlation will remain feasible or even optimal, such as when examining closely similar images that are not susceptible to major variations. However, with its greater accuracy under adverse conditions, geometric edge detection is becoming the pattern-matching technique of choice for complex and challenging applications.
Companies are working to expand the capabilities of geometric edge detection, both in terms of accuracy and also in the range of information available to intelligently evaluate an image. As industries implement new manufacturing processes with increasingly variable conditions, machine vision tools will become more adaptive and precise, even when presented with less-than-perfect images.
GARY WAGNER, president, can be contacted at Imaging Technology Inc., 55 Middlesex Turnpike, Bedford, MA 01730; 781-275-2700; Fax: 781-275-9590; E-mail: [email protected].