
Composite Pattern-Matching Algorithms Identify All the Good Parts

DAVID J. MICHAEL, COGNEX CORP.

In many industries, manual inspection is being replaced by machine vision inspection technology for its higher speed and greater accuracy, which improves product quality and reduces production costs. The most critical step in many machine vision applications involves searching for a particular object within the camera’s field of view (registration) and determining how closely the object matches an ideal image of the object (inspection). Conventional pattern matching works by training a pattern based on the features found in a representative image of the part. In some applications, however, good parts may contain substantial variations, so that distinguishing good parts from bad is difficult.


Figure 1. A package of sugar-free, cherry-flavored gelatin used to train a conventional pattern-matching model. The highlighted area is used as the reference image. Photo courtesy of Cognex Corp.



Variations among good parts are often called noise; variations that make good parts different from bad parts are called signal. Until now, setting up a vision inspection to distinguish between signal and noise in these applications required considerable manual effort; for example, the parts of the image that contain irrelevant detail had to be masked. But a recent advance in machine vision inspection involves the use of intelligent composite pattern-matching algorithms that learn how to distinguish between the signal and the noise simply by training on an assortment of parts. The new pattern-matching algorithms can substantially reduce setup time and improve the accuracy of difficult registration and inspection operations in which there is a high level of variability among good parts.


Figure 2.
The areas highlighted in green here match the reference image. Photo courtesy of Cognex Corp.

Conventional pattern matching

Traditional pattern-matching technology relies upon a pixel-grid analysis process commonly known as normalized correlation. This method looks for statistical similarity between a gray-level model, or reference image, of an object and portions of the image to determine the object's X/Y position. Though effective in certain situations, this approach limits both the ability to find objects and the accuracy with which they can be found under conditions of varying appearance that are common on production lines: changes in object angle, size and nonlinear shading.

About a decade ago, geometric pattern-matching algorithms were introduced. They learn an object's geometry as a set of boundary curves that are not tied to a pixel grid and then look for similar shapes in the image without relying on specific gray levels. The pattern-matching algorithm reports all instances of the pattern and provides each one's location, angle of rotation, scaling in the X and Y directions and how well the found pattern matches the model image. The result is an improvement in the ability to accurately find objects despite changes in angle, size and nonlinear shading.

Some applications, however, have such substantial variation in the appearance of good parts that it may be difficult to acquire a single image representative of all possible good parts. Examples include parts with features that vary among good parts, parts that appear on different backgrounds, and parts whose image is affected by noise, clutter and occlusion that are not a cause for rejection. Attempting to train a conventional pattern in these situations often produces an unusable pattern, because the pattern includes numerous features not present in other run-time part images. These applications are difficult to address with conventional pattern-matching tools and often require a considerable setup effort.
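For readers who want to see the classic approach concretely, the following is a minimal sketch of a normalized correlation search, assuming 8-bit grayscale images held as NumPy arrays. It recovers only the X/Y translation of the model; the function names are illustrative and are not part of any Cognex library.

import numpy as np

def ncc_score(window, model):
    # Normalized correlation between an image window and the gray-level model.
    w = window - window.mean()
    m = model - model.mean()
    denom = np.sqrt((w * w).sum() * (m * m).sum())
    return float((w * m).sum() / denom) if denom > 0 else 0.0

def find_best_match(image, model):
    # Slide the model over the image and return the best score and X/Y position.
    image = image.astype(float)
    model = model.astype(float)
    ih, iw = image.shape
    mh, mw = model.shape
    best_score, best_xy = -1.0, (0, 0)
    for y in range(ih - mh + 1):
        for x in range(iw - mw + 1):
            score = ncc_score(image[y:y + mh, x:x + mw], model)
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_score, best_xy  # score in [-1, 1], (x, y) of the model's top-left corner

The exhaustive sliding-window loop is written for clarity rather than speed; commercial tools typically accelerate the search, and geometric pattern matching additionally recovers angle and scale.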


Figure 3.
An orange flavor label shows a poor match to the reference image. Photo courtesy of Cognex Corp.

One possible approach is to manually mask the representative image, eliminating areas that vary among good parts. Another possible approach is to acquire multiple representative images, each corresponding to a particular type of “good.” The pattern-matching operation is then performed multiple times, once for each of the representative images. However, this approach requires extra memory, and cycle time is typically much longer than when using a single representative image.
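The multiple-representative-image workaround can be pictured as one separate search per reference image, keeping the best result. The sketch below assumes OpenCV's template matching and an arbitrary acceptance threshold of 0.8; the threshold and the function name are placeholders, not a vendor API.

import cv2

def best_of_templates(image, templates, accept=0.8):
    # One full normalized-correlation search per reference image.
    best_score, best_loc, best_idx = -1.0, None, None
    for i, templ in enumerate(templates):
        result = cv2.matchTemplate(image, templ, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(result)
        if score > best_score:
            best_score, best_loc, best_idx = score, loc, i
    return best_score >= accept, best_idx, best_loc  # pass/fail, which template, X/Y position

Because each reference image triggers its own full search, memory use and cycle time grow with the number of templates, which is the penalty described above.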


Figure 4.
A strawberry flavor label also shows a poor match to the reference image. Photo courtesy of Cognex Corp.

Figures 1-4 highlight a limitation of single-model geometric pattern matching. The item to be inspected is a family of labels for various flavors of a food product. In this case, the area highlighted in the cherry flavor label in Figure 1 is used as the reference image. Figure 2 shows another cherry flavor label registered with this reference image. The areas highlighted in green in Figure 2 are those that match the reference image. Nearly every area of the cherry flavor label matches the reference image, so this label would achieve a very high score and would almost certainly pass an inspection.


Figure 5.
A composite model was created by training on cherry, orange and strawberry labels. Photo courtesy of Cognex Corp.

Figure 3 shows an orange flavor label inspected with the same reference image. In this case, the areas that the cherry and orange flavor have in common are highlighted in green, indicating a match: the phrases “Great Value” and “Sugar Free Low Calorie,” and the weight and calorie count. On the other hand, the word “Orange” and the picture of the orange are highlighted in red, indicating that they do not match the reference image. The orange label and the strawberry label both generate a relatively poor match, even though the goal of an inspection is to pass labels of all three flavors that meet printing standards.


Figure 6.
This cherry label is registered as a perfect match. Photo courtesy of Cognex Corp.

New composite pattern-matching algorithms

Machine vision performance and accuracy on applications such as these can often be dramatically improved with a new generation of composite pattern-matching tools. These new tools train on multiple images to automate the process of identifying the characteristics of the image that are critical in distinguishing good parts from bad, as opposed to those that vary among good parts. The self-learning algorithm collects the common features from each image and unites them into a single ideal model. This approach filters out noise or other random errors from the training images that would otherwise appear in the final composite model. The new self-learning pattern-matching tools can be used anywhere that conventional pattern-matching tools are used today.
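As a rough illustration of the training idea, the sketch below builds a composite model by letting each training image vote on which edge features are present and keeping only the features that recur in most of the images. It assumes the training images are already registered to one another; the edge extractor, the voting rule and the 90 percent threshold are simplifying assumptions, not the vendor's algorithm.

import numpy as np

def edge_features(image, thresh=30.0):
    # Crude feature extraction: gradient magnitude above a threshold.
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gx, gy) > thresh

def train_composite(training_images, keep_fraction=0.9):
    # Features present in most training images are kept as 'signal';
    # features that appear in only a few images are dropped as 'noise'.
    votes = sum(edge_features(img).astype(int) for img in training_images)
    needed = int(np.ceil(keep_fraction * len(training_images)))
    return votes >= needed  # boolean composite model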



Figure 7.
When an orange label is registered with a composite pattern-matching algorithm, the result is a better match than a conventional pattern-matching algorithm. Photo courtesy of Cognex Corp.

A typical example is checking the alignment of a chip on a bioanalyzer with its mount. The chip is inscribed with circles that match up with openings in the mount. If the chip is misaligned, the circles will look oval instead of round. The pattern-matching vision tool identifies chips whose inscribed shapes are circular within a certain tolerance as good parts and those whose shapes appear oval as bad parts.

Figure 5 shows the composite model that was created by training on the cherry, orange and strawberry flavor labels. The pattern-matching algorithm separates the signal (the areas of the label that are the same for all three flavors) from the noise (the areas of the label that are different for the different flavors). The pattern-matching algorithm then ignores the noise when processing an image during the inspection process, as shown in Figures 6-8. These figures show that the areas of the label that change with the flavor (highlighted red in the previous set of images) now have no highlight, meaning that these areas are not considered when scoring the image during registration or inspection. On the other hand, the areas of the label that are common to all flavors are still highlighted in green. This substantially improves the accuracy of the registration or inspection process without requiring the additional effort of manually masking the areas of the image that are not relevant to the vision application.
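One way to picture the scoring step is with an explicit don't-care mask: features that appeared in only some of the training images are excluded from the score, so flavor-specific areas can neither help nor hurt the result. The names and the 90 percent threshold below are assumptions for illustration only.

import numpy as np

def dont_care_mask(training_edge_maps, keep_fraction=0.9):
    # Pixels whose features appear in some but not most training images
    # (for example, the flavor name and the fruit picture) are treated as noise.
    votes = sum(m.astype(int) for m in training_edge_maps)
    needed = int(np.ceil(keep_fraction * len(training_edge_maps)))
    return (votes > 0) & (votes < needed)

def composite_score(runtime_edges, composite_model, ignore):
    # Score only on the stable 'signal' features; ignored areas neither
    # raise nor lower the score.
    signal = composite_model & ~ignore
    found = runtime_edges & signal
    return float(found.sum()) / max(int(signal.sum()), 1)  # 1.0 = perfect match on the signal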


Figure 8.
A strawberry label also yields a better match than a conventional pattern-matching algorithm. Photo courtesy of Cognex Corp.

Composite multiple-model pattern matching

In applications with multiple discrete appearance types, such as the multiple-flavor dessert label, inspection performance can be further improved through the use of composite multiple-model pattern matching. Composite multiple-model pattern matching works in the same way as single-model composite pattern matching, in that the algorithm is trained on multiple images and automatically separates the signal from the noise. The difference is that the multiple-model algorithm creates a model for each of the discrete appearance types found in the training images. Then, when the inspection is run, the pattern-matching algorithm returns a registration or inspection result based on the model that produces the best result. The higher accuracy provided by composite multiple-model pattern matching is shown in Figures 9-11. Note that areas specific to the individual flavors, such as the type that indicates the flavor, are now highlighted in green in the inspection results. The composite multiple-model pattern-matching algorithm inspects the entire image and detects any problems with the flavor text.


Figure 9.
Composite multiple-model pattern matching provides a perfect match on a cherry flavor label. Photo courtesy of Cognex Corp.

Individual conventional pattern-matching or composite pattern-matching models could be created for each appearance type, but in most cases, use of the composite multiple-model pattern matching is much more efficient. With conventional pattern-matching or composite pattern-matching models, feature extraction would have to occur for each model used in the registration or inspection. On the other hand, the multiple-model composite pattern-matching algorithm carries out feature extraction only once per run-time image. These features are then used as input for each of the models to be run. This approach reduces both the cycle time and the amount of memory required for the vision application.
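The efficiency argument can be sketched as follows: features are extracted once per run-time image and then compared against each appearance model, with the best-scoring model determining the result. The edge extractor and the simple overlap score are stand-ins for whatever features and scoring a commercial tool actually uses.

import numpy as np

def edge_features(image, thresh=30.0):
    # Crude feature extraction: gradient magnitude above a threshold.
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gx, gy) > thresh

def inspect(runtime_image, models):
    # models: dict mapping an appearance name (e.g., 'cherry') to its composite model.
    features = edge_features(runtime_image)  # extracted once, reused by every model
    best_name, best_score = None, -1.0
    for name, model in models.items():
        score = float((features & model).sum()) / max(int(model.sum()), 1)
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score  # best-matching appearance type and its score

Here the cost of adding another appearance type is only one extra scoring pass, not another round of feature extraction.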


Figure 10.
Composite multiple-model pattern matching provides a perfect match on an orange flavor label. Photo courtesy of Cognex Corp.

Applications and hardware

Potential applications for composite pattern matching can be found wherever similar inspection operations are needed on a variety of objects with a basic similarity but with widely varying features. In these applications, composite pattern matching can drastically reduce the time required to set up the vision application. One of the most obvious applications is in the consumer-products industry, where labels for many different product variants must be inspected for the same basic defects. Figure 12 shows an electronics-industry application. But many different industries build products in a wide range of sizes, styles and colors, with features added or subtracted. Composite pattern matching has the potential to save time in many of these applications.


Figure 11.
Composite multiple-model pattern matching provides a perfect match on a strawberry flavor label. Photo courtesy of Cognex Corp.

Next-generation composite pattern-matching algorithms are typically implemented on smart-camera-based vision systems controlled by internal microprocessors so that they can operate independently of a PC. These vision systems are less expensive to implement because they typically can be developed without writing a line of code, thanks to prewritten functions called vision tools. Operators can adjust the focus or lighting either by plugging in a laptop or by operating the vision system in teach mode. The smart-camera approach also provides a lower cost of ownership because the vision system operates independently of a computer operating system, is inherently much more stable over time and is not subject to computer obsolescence issues.


Figure 12.
Composite model training from multiple degraded images. Photo courtesy of Cognex Corp.

Meet the author

Dr. David J. Michael is the director of Core Vision Tool Development at Cognex Corp. in Natick, Mass.; email: [email protected].



Published: January 2016
