
Frame Grabbers Fine-Tune Machine Vision

The right combination of lighting, optics and frame grabbers means the difference between good and bad inspection results.

Philip Colet, Coreco Imaging Inc.

Frame grabbers convert video images from cameras into digital format and transfer the digital images to PCs, which use the converted data to make decisions about the objects being inspected. Although performing these functions quickly and reliably is critically important to the success of machine vision inspection, frame grabbers are much more than mere data converters and transfer conduits. They offer a range of capabilities that can compensate for poor lighting, poor optics and awkward ways in which camera sensors present information to them. These capabilities increase the quality of the acquired images and, ultimately, help the machine vision system perform more reliable inspections.

Coreco_Fig1.jpg
High-performance frame grabbers like Coreco Imaging’s X64-CL can help compensate for poor lighting, optics and data presentation from the camera’s sensor to the frame grabber.

Although frame grabber selection may seem trivial at design time, choosing the wrong one can add months of development time and, potentially, lead to the failure of the machine vision system design.

“Garbage in, garbage out” is a truism for any computing application, and machine vision is no exception. The system must have a clear, complete and undistorted image to work with. When lighting conditions are insufficient to properly illuminate an object, or when the reflectance attributes of the object are highly variable, the results are likely to be less than reliable.

Lighting might seem like a simple enough concept, but achieving proper lighting during inspection is anything but simple. In fact, it remains a challenge even for seasoned machine vision professionals with knowledge of techniques, such as front, back and structured lighting, and with an arsenal of illuminators, from fluorescent bulbs to LEDs.

Common machine vision lighting issues include:

• Varying levels of contrast on the image being inspected. For example, a polished wafer that has gone through some processing still has a nonuniformly reflective surface, making it difficult to capture consistently illuminated images. Tracking or locating fiducials under these conditions is extremely challenging and has a direct impact on manufacturing quality.

• Background noise. This is a common problem in packaging inspections, where the product being examined is packaged in a colorful, graphical container or displayed on a busy background. All of these can make it difficult to pick up the area of interest.

• Lighting problems. These can range from insufficient or excessive light to uneven light across the camera’s field of view.

Illumination compensation

Certain frame grabber capabilities and techniques, however, can compensate for poor lighting conditions and for the camera sensor’s inability to redress these issues, ensuring that the machine vision system has a reliable image with which to work.

Input signal conditioning, for instance, is key to minimizing the effects of camera variability or lighting fluctuations. In an analog frame grabber, range and offset controls compensate for either too much or too little light and maximize the digitization of the image over the intensity range of interest. With digitization maximized over the desired video range, the machine vision system has more accurate data to analyze, resulting in better system performance.
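In digital terms, range and offset conditioning amounts to a per-pixel gain-and-offset transform applied before any analysis. The Python sketch below illustrates the idea; the gain and offset coefficients are arbitrary example values, and a real frame grabber would apply the equivalent correction in its analog front end or hardware, not per pixel in software.

```python
def condition_pixel(value, gain=1.5, offset=-20):
    """Apply an illustrative gain (range) and offset correction to one
    8-bit pixel, clamping the result to the valid 0-255 range."""
    corrected = int(value * gain + offset)
    return max(0, min(255, corrected))

# Stretch an underexposed row of pixels across more of the 0-255 range.
row = [40, 60, 80, 100, 120]
print([condition_pixel(v) for v in row])  # -> [40, 70, 100, 130, 160]
```

With the gain chosen to spread the dim input range over more of the digitizer's output range, subsequent processing sees better-separated intensity levels.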

Other lighting and camera effects that can degrade image quality are nonuniformity of the lighting field and variation in the gamma efficiency of individual pixels on the camera sensor. In the former, light intensity decreases from the center to the edges of the lighting field; in the latter, gamma efficiency (an individual pixel’s ability to convert photons to electrical charge) can vary by up to 10 percent from pixel to pixel. Real-time flat-field correction, a feature available on some of the more sophisticated frame grabbers, adjusts the image data before it is transferred to the host to compensate for both effects, providing a higher-quality image.
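A common flat-field formulation (a textbook approach, not specific to any vendor's board) derives per-pixel gains from two calibration frames: a dark frame captured with no light and a flat frame of a uniformly lit target. A minimal Python sketch, operating on flat lists of pixel values:

```python
def flat_field_correct(raw, dark, flat):
    """Flat-field correction: corrected = (raw - dark) * m / (flat - dark),
    where m is the mean flat-field response. All inputs are equal-length
    lists of pixel values from the same sensor positions."""
    denoms = [f - d for f, d in zip(flat, dark)]
    m = sum(denoms) / len(denoms)
    return [round((r - d) * m / g) for r, d, g in zip(raw, dark, denoms)]

# A uniform scene imaged through lighting that falls off at the edges:
dark = [2, 2, 2, 2]          # sensor response with the shutter closed
flat = [52, 102, 102, 52]    # uniform white target: dimmer at the edges
raw  = [32, 62, 62, 32]      # the scene, carrying the same falloff
print(flat_field_correct(raw, dark, flat))  # -> [45, 45, 45, 45]
```

After correction, the uniform scene comes out uniform, which is exactly what the article's lighting-falloff example requires.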

Look-up tables also are frequently used to modify the input signal data for easier signal processing or better display capabilities. Each is basically just an array of registers. The index of a particular register is a pixel’s value; if a digitized pixel has value 128, it would point to register 128 in the array, and the value of the register would become the new pixel value. For example, assume that an application requires the inspection of objects that are acquired as 8-bit-per-pixel monochrome images (meaning that pixel values can range from 0 to 255).

To correct the image’s contrast, some pixels need to be made darker and some lighter. The system steps through the image, one pixel at a time, using each pixel’s value as an index into the look-up table. Once the correct position in the table is located for a given pixel, it is necessary only to replace the original pixel’s value with the corresponding new value.
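The lookup step itself is trivial; the point is that it replaces arbitrary per-pixel arithmetic with a single indexed read. A minimal Python sketch (the contrast-stretch table here is an illustrative choice, not a standard mapping):

```python
# A 256-entry look-up table: each index is an input pixel value, each
# entry is the replacement value. This example darkens values below 128
# and brightens values at or above it (thresholds chosen for illustration).
lut = [max(0, v - 30) if v < 128 else min(255, v + 30) for v in range(256)]

def apply_lut(image, lut):
    """Replace each pixel value with lut[value]. Hardware LUTs perform the
    same indexed lookup, but on the frame-grabber board at pixel rate."""
    return [lut[p] for p in image]

image = [20, 100, 128, 200]
print(apply_lut(image, lut))  # -> [0, 70, 158, 230]
```

Because the 256 table entries are precomputed once, the per-pixel work is a single memory read, which is why the operation maps so naturally onto frame-grabber hardware.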

Faster processing

Such tables are extremely effective tools for equalizing or normalizing images captured under poor lighting conditions. Unfortunately, when implemented in software, they can consume a considerable amount of processing time because of the extensive memory read/write operations they require. By using a frame grabber with the tables implemented in hardware, this time-consuming task can be offloaded from the host to the frame-grabber board, speeding processing.

Histogram equalization also is an important tool. By depicting the dark/light value of each pixel and the contrast among pixels in an acquired image, histogram graphs are useful guides for changing the appearance of the image. Histogram equalization allows the creation of an image with optimal lighting and contrast. By spreading out or “equalizing” the pixels in a histogram of an image with dark, low contrast, for example, it is possible to obtain a uniform pixel density to improve contrast — and provide the machine vision system a better-quality image with which to work.
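Histogram equalization is usually expressed through the cumulative distribution of pixel values: each gray level is remapped in proportion to how many pixels fall at or below it. The sketch below is the standard textbook formulation in pure Python, applied to a flat list of gray levels; it is not any particular frame grabber's implementation.

```python
def equalize(image, levels=256):
    """Histogram-equalize a list of gray values (0..levels-1) using the
    cumulative-distribution mapping. Assumes the image is not a single
    flat gray level (which would make the mapping degenerate)."""
    hist = [0] * levels
    for p in image:
        hist[p] += 1
    cdf, total = [], 0          # cumulative pixel counts per gray level
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(image)
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in image]

# A dark, low-contrast image crowded into levels 100-103 spreads to 0-255.
print(equalize([100, 100, 101, 102, 103, 103]))
```

The crowded input histogram, a few adjacent spikes, becomes a distribution spanning the full range, which is precisely the "spreading out" the text describes.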

If you get new glasses, you see better, right? Not if the prescription is wrong or if the optician grinds the lenses for an astigmatism that you don’t have. Similarly, investing in new, top-of-the-line optics by no means assures that all of the images a camera acquires and hands off to a frame grabber will automatically be as sharp as you want them to be.

A frame grabber cannot compensate for a bad optical design or for a poorly matched lens/camera combination. However, choosing the right device for each optical system can maximize the potential of more powerful optics — whereas choosing the wrong one simply wastes money without achieving the desired increase in resolution.

If, for example, a system designer specifies a high-resolution camera and a lens with the needed resolving power but selects a low-end frame grabber that cannot digitize high-resolution data, there is no point in investing in high-end optics. Machine vision system designers must properly match the optical system to the frame grabber to achieve the desired resolution.

Another use for a look-up table enters the picture. Every lens system is like a pair of sunglasses in that it affects different wavelengths of light that pass through the optical path in different ways. Certain wavelengths can pass unaffected, while others may be attenuated. If the machine vision system is inspecting for red apples and the lens attenuates red light, the frame grabber can compensate by boosting the red signal through an RGB look-up table to ensure that all pertinent information reaches the system.
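As a concrete illustration of such an RGB look-up table, the sketch below boosts the red channel to offset lens attenuation while passing green and blue through unchanged. The 20 percent boost factor is a made-up example value, not a calibration figure.

```python
# Hypothetical red-boost LUT: brighten red by 20% (illustrative factor)
# to offset lens attenuation; green and blue use identity tables.
red_lut = [min(255, round(v * 1.2)) for v in range(256)]
identity = list(range(256))

def correct_rgb(pixel):
    """Apply per-channel LUTs to one (r, g, b) pixel."""
    r, g, b = pixel
    return (red_lut[r], identity[g], identity[b])

print(correct_rgb((100, 100, 100)))  # -> (120, 100, 100)
```

A real system would derive the boost curve from a measured spectral response rather than a flat multiplier, but the mechanism, one LUT per channel, is the same.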

Systems also must compensate for image distortion, a side effect of some optical systems that warps the image, especially toward the edges. Typical examples are squares that curve inward or outward. If the inspection goal is to measure the square and make sure that it’s exactly the same size as all the other squares, a distorted image will yield flawed results. Or, if the object of the inspection is to search for a crosshair fiducial and the distortion at the edges of the lens is causing the crosshair to turn into a crossbow, the pattern-recognition software will never locate the fiducial.

Warping algorithms can resolve distortion problems by “unwarping” the image so that it looks “real” again, enabling the machine vision system to gauge the square based on the correct parameters. When standard cameras are used in a machine vision application, warping is usually done in software. However, when high-end cameras — such as line-scan cameras that generate images at rates of up to 160 MB/s — are used (as they are increasingly), the camera/software combination cannot warp images in real time. High-speed frame grabbers can, and warping is performed directly in the frame grabber hardware — typically, in an embedded processor.
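One common way to unwarp radial (barrel or pincushion) distortion is inverse mapping: for each output pixel, compute where it originated in the distorted image and sample there. The Python sketch below uses a single-coefficient radial model with nearest-neighbor sampling; the coefficient k is illustrative, and a real system would calibrate it against a known target and interpolate between source pixels.

```python
def unwarp(image, w, h, k):
    """Correct simple radial distortion by inverse mapping: each output
    pixel at radius r (from the image center) samples the distorted
    source position at radius r * (1 + k * r^2). `image` is a flat list
    of w*h gray values; k is an illustrative distortion coefficient."""
    cx, cy = w / 2, h / 2
    out = [0] * (w * h)
    for y in range(h):
        for x in range(w):
            dx, dy = x - cx, y - cy
            scale = 1 + k * (dx * dx + dy * dy)
            sx, sy = int(cx + dx * scale), int(cy + dy * scale)
            if 0 <= sx < w and 0 <= sy < h:   # outside the source: leave black
                out[y * w + x] = image[sy * w + sx]
    return out
```

With k = 0 the mapping is the identity; a negative k pulls in barrel-distorted edges, a positive k pushes out pincushioned ones. As the article notes, at line-scan data rates this loop runs in frame-grabber hardware rather than on the host.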

Coreco_Fig2.jpg
In this application, a line-scan camera is used, and the narrow band of light over the object being inspected takes on a parabolic shape, creating intensity differences among the pixels at the edge and in the middle of the inspection area. Flat-field correction can compensate for these differences in pixel intensity.

Data presentation

The manufacturers of CCDs for high-end, multiple-tap cameras are concerned about their sensors’ light-gathering capabilities, image quality and the speed with which acquired data can be transferred to the frame grabber. CCDs are, therefore, designed to maximize these three characteristics. They are not designed to acquire and present data to the frame grabber in a user-friendly format; e.g., when an image of an inspection object or feature is displayed, it often doesn’t look anything like the object or feature that you are inspecting.

One common occurrence in machine vision applications is that information received from the camera is upside down, backward (mirror image) or worse, because different sections of the image can be affected in different ways. Fortunately, using on-the-fly resequencing, some frame grabbers can compensate for this poor data presentation.

Some multiple-tap cameras break data down into four quadrants. High-speed frame grabbers read data from multiple-tap sensors at the intersection of these four quadrants and resequence the data in real time, at speeds of nearly 1000 fps. Performing this function within the frame grabber takes a huge load off the host computer, freeing it up for other processing tasks.
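The exact tap geometry varies by camera, but the resequencing step itself is a deterministic reshuffle. The Python sketch below assumes a hypothetical four-tap readout in which each tap scans its quadrant starting from the frame's outer corner, so the right-hand taps arrive mirrored horizontally and the bottom taps mirrored vertically; a frame grabber performs the equivalent reordering in hardware as the data streams in.

```python
def resequence_quadrants(q_tl, q_tr, q_bl, q_br, qw, qh):
    """Reassemble a frame (as a list of rows) from four tap buffers,
    each a flat list of qw*qh pixels. Assumed (hypothetical) geometry:
    right-hand taps are mirrored horizontally, bottom taps vertically."""
    def rows(buf):
        return [buf[r * qw:(r + 1) * qw] for r in range(qh)]
    tl = rows(q_tl)                              # already in display order
    tr = [row[::-1] for row in rows(q_tr)]       # un-mirror horizontally
    bl = rows(q_bl)[::-1]                        # un-mirror vertically
    br = [row[::-1] for row in rows(q_br)][::-1] # un-mirror both axes
    top = [l + r for l, r in zip(tl, tr)]
    bottom = [l + r for l, r in zip(bl, br)]
    return top + bottom
```

Because the reshuffle is fixed for a given camera, it costs the hardware nothing per frame beyond address arithmetic, which is how boards sustain it at rates approaching 1000 fps.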

Coreco_Fig3.jpg
At left, a cluster of tall and short spikes on the left of the x-axis, with nothing on the right, indicates a dark, low-contrast image. At right, the tall spikes have thinned out and the short ones have spread to cover the entire x-axis. Equalizing the distribution of the pixels improves the image contrast.

There’s a lot more to frame grabbers than meets the eye. These powerful machine vision components do not just blindly acquire and transfer data from CCDs. Frame grabbers can also compensate for suboptimal lighting, optics and methods of receiving information from camera sensors, transforming incomplete, distorted, dark or otherwise poor-quality data captures into complete, clear, high-contrast images.

Better image quality ensures more reliable inspections, which in turn leads to fewer product defects, higher profits and all of those other reasons why users purchase a machine vision system in the first place.

Philip Colet is the vice president of sales and marketing for Coreco Imaging Inc. in St. Laurent, Quebec, Canada.

Published: September 2003