Dr. Benjamin M. Dawson, Coreco Imaging Inc.
Proper lighting is an important part of any machine vision system. Obviously, the lighting needs to be adequate, but it also should be arranged so that it amplifies the things you want the vision system to see while attenuating those you don't. Therefore, besides supplying photons, lighting can provide optical preprocessing for the machine vision system to reduce and simplify its computations, making it faster and more robust.
In some machine vision applications, the environment is such that you can tightly control the lighting. For example, to find dust particles on an unprocessed semiconductor wafer, you can use low-angle (dark field) illumination. The angle and pattern of the lights are carefully set, and the inspection area is shielded from ambient light. The lighting can be a critical competitive advantage for the inspection machine maker.
In most applications, however, the environment allows some, but not complete, control of the lighting. When machine vision is installed on an existing production line, options are often limited as to where the lights can be placed and how to block ambient light. For example, Coreco Imaging’s vision systems inspect the glue patterns that secure automotive trunk liners. The machines that apply the glue are open to the operator, so ambient light can’t be avoided.
Some machine vision applications must use available light, so the lighting can’t be controlled. Examples include face identification, tracking people in public spaces, vehicle navigation, and robotic stacking and packing of canned goods in a warehouse. The industry is still learning how to make machine vision systems work reliably in available light.
When ambient light can’t be avoided, you can improve inspection reliability by raising the “signal” level provided by your lighting system against the “noise” level from ambient light. A common technique is to make the lighting a lot brighter than the ambient and to stop down the camera (see figure). Other methods include using LED illumination with a matched bandpass filter over the camera lens or using polarized light.
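As a back-of-the-envelope illustration of "a lot brighter than the ambient," here is the arithmetic (the lux values are assumed for illustration, not taken from the article):

```python
# Illustrative arithmetic (hypothetical numbers): how much brighter the
# controlled lighting must be so that ambient light contributes little
# to each pixel's measured intensity.
ambient_lux = 500.0      # assumed factory ambient level
lighting_lux = 10000.0   # assumed intense machine vision lighting

# Fraction of the sensed intensity that comes from ambient light.
ambient_fraction = ambient_lux / (ambient_lux + lighting_lux)
print(f"Ambient contributes {ambient_fraction:.1%} of the signal")
# With lighting 20x brighter than ambient, ambient light is under 5% of
# the measurement; stopping down the lens then keeps the sensor in range.
```

Stopping down the camera does not change this ratio, but it prevents the now-intense scene from saturating the sensor, so the controlled lighting dominates a well-exposed image.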
Figure: Most of the effects of ambient illumination are overcome by using intense lighting.
Images from both controlled and, shall we say, semiambient situations usually have intensity gradients and nonuniformities that are impossible or too expensive to remove with better lighting. To operate effectively, the machine vision system has to ignore or compensate for this “noise.”
Edges and surface markings, such as bar codes and scratches, are important features for most machine vision inspections. For example, edges are used to find parts in an image and to check for cracks. Machine vision systems often employ edge detectors — local, differential operators that detect sudden intensity changes — to find these features in the image. Because these operators are local, they ignore slowly changing intensity gradients. Because they are differential, they ignore small changes in the overall lighting intensity.
Algorithms built on edge detectors can therefore overcome some effects of imperfect lighting. For example, edge detectors support fast, accurate search for an object's position in an image, and they enable optical character recognition and bar-code reading even in the presence of strong lighting gradients.
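The gradient-rejection property of a local, differential operator can be sketched in a few lines. This is a minimal illustration on a synthetic scan line, not the author's implementation:

```python
import numpy as np

# Synthetic scan line: a slow lighting gradient plus a genuine step edge
# (e.g., a part boundary) at x = 50.
x = np.arange(100, dtype=float)
gradient = 0.2 * x                    # slowly varying lighting nonuniformity
step = np.where(x >= 50, 40.0, 0.0)   # the real feature
scanline = gradient + step

# Central-difference derivative: local and differential, so the gentle
# gradient yields only a small, constant response (0.4 per sample here)
# while the step produces a response two orders of magnitude larger.
deriv = np.convolve(scanline, [1, 0, -1], mode="same")
edge_pos = int(np.argmax(np.abs(deriv[1:-1]))) + 1
print(edge_pos)  # the detector localizes the step near x = 50
```

The same reasoning carries over to 2-D operators such as Sobel or gradient-magnitude filters: the differencing cancels slowly varying illumination while preserving abrupt transitions.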
If you want to measure a material’s reflection or transmission, the vision system has to remove the effects of imperfect lighting (and other factors) to get accurate measurements. To measure the coating on a plastic film, you can use a line of light behind the film and view the transmitted light with a line-scan camera. Pinholes — areas where the coating is very thin or missing — appear as bright spots. Changes in coating thickness (density) appear as gradual changes in intensity. An edge detector won’t work in this application because these gradual changes should be measured, not ignored.
Unfortunately, nonuniformities in the lighting, lens and camera invalidate the density measurements. Better lighting might not be possible and doesn’t help with the other nonuniformities. Instead, the vision system records gain and offset values for each pixel in the line-scan sensor. These values are used as a per-pixel linear correction that compensates for most nonuniformities.
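A per-pixel gain/offset (flat-field) correction of this kind can be sketched as follows. The calibration values and sensor model are assumed for illustration; this is not Coreco's implementation:

```python
import numpy as np

# Sketch of a per-pixel linear correction for a line-scan sensor.
rng = np.random.default_rng(0)
n = 512                                  # pixels in the line-scan sensor

# Simulated (assumed) per-pixel nonuniformities.
offset = rng.uniform(5.0, 15.0, n)       # per-pixel dark level
gain = rng.uniform(0.8, 1.2, n)          # per-pixel sensitivity

# Calibration: record a dark frame and the response to a uniform
# bright target of known level (here 200).
dark_frame = offset
bright_frame = offset + gain * 200.0

def correct(raw):
    # Map each raw pixel back to the true scene intensity.
    return 200.0 * (raw - dark_frame) / (bright_frame - dark_frame)

# A film of uniform density 120 now measures flat across the line,
# so gradual density changes can be trusted as real coating variation.
raw = offset + gain * 120.0
corrected = correct(raw)
print(corrected.std())   # near zero after correction
```

Because the correction is linear per pixel, it compensates for lighting, lens and sensor nonuniformities together, whatever their individual sources.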
A similar situation occurs in particle analysis using a microscope. An intensity threshold is used to segment the particles from the background, and blob analysis is used to count and characterize particles. The nonuniformities confound the threshold, so that particles appear to change size as they move around the image. To compensate for lighting and other variations, the vision system applies a gain and offset correction for each pixel in the image before the threshold is applied. An image coprocessor often makes these corrections because the images can be large.
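The threshold failure and its correction can be demonstrated on synthetic data. In this sketch (assumed data, not a real inspection image), two identical particles segment to different sizes under a lighting gradient, and dividing out the illumination field restores equality:

```python
import numpy as np

# Two identical dark particles on a bright background, imaged under a
# left-to-right lighting gradient.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w].astype(float)
illum = 0.7 + 0.6 * xx / w               # left side dimmer than right

def blob(cx):
    # Soft-edged dark particle centered at (32, cx).
    r2 = (yy - 32.0) ** 2 + (xx - cx) ** 2
    return 150.0 * np.exp(-r2 / (2 * 3.0 ** 2))

scene = 200.0 - blob(16) - blob(48)      # identical particles
raw = scene * illum                      # what the camera records

def sizes(img, thresh=125.0):
    # Count below-threshold pixels in each half of the image.
    mask = img < thresh
    return int(mask[:, :32].sum()), int(mask[:, 32:].sum())

left, right = sizes(raw)          # unequal, despite identical particles
gleft, gright = sizes(raw / illum)  # after gain correction: equal
print(left, right, gleft, gright)
```

Here the correction is a simple division by a known illumination field; the per-pixel gain-and-offset version described above handles sensor and lens nonuniformities as well, and an image coprocessor keeps the per-pixel arithmetic fast on large images.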
In summary, proper lighting makes machine vision possible, or at least easier. Many vendors offer specialized lights for machine vision and can help recommend an arrangement. When the lighting falls short, whether because of ambient light, physical constraints or stubborn nonuniformities, clever algorithms can often compensate.
Meet the author
Benjamin M. Dawson is director of strategic development at Coreco Imaging Inc. in Billerica, Mass.; e-mail: firstname.lastname@example.org.