
Vision-Enabled Robots Improve Automation

Industrial Photonics
Jul 2014
Greg Raciti, Faber Industrial Technologies, and Steve Zhu, Teledyne Dalsa

Thanks to falling prices, better hardware and improved software, vision-enabled robots are increasingly found in smaller and smaller automation and manufacturing operations, where they improve assembly processes, conduct quality checks and enable automated final inspection.

Industrial robots have come a long way. They first appeared more than 50 years ago, according to the International Federation of Robotics. Deployed on an automobile assembly line, the first system sequenced and stacked hot pieces of die-cast metal, doing so with a 4000-pound arm. The robot cost $65,000, with movement programmed in joint coordinates and movement accuracies in the 0.0001-in. range, or about 2.5 µm.

Today, small industrial robots from Denso Robotics and other suppliers weigh about 75 lb and are capable of composite speeds of 8.5 m/s or more. Those movements can be made very precisely, with repeatability of 20 µm or better.

Besides the decline in weight, size and costs, and the increase in speed and accuracy of robotic movement, software to control robots has also improved. Originally, robots were programmed in their own specialized, proprietary language. Today’s software is much more open and high level. Indeed, some industrial robots can be trained by line workers and therefore don’t require any programming at all.

Vision system progress

Industrial robots originally were blind, but the International Federation of Robotics says that by 1980, machine vision guided a robot to pick a randomly oriented part out of a bin. This demonstration in an academic setting was followed over the years by a host of commercial offerings.

While this was going on, machine vision capabilities evolved. Today, vision systems can offer megapixel resolution with full color and even 3-D data. However, cost and other requirements often constrain capabilities. For instance, the amount of space available to install an industrial vision system may measure only a few centimeters on a side. Optics, a sensor and a vision processor have to fit within that volume. Other parameters impacting an industrial vision system are vibration and dust, along with the need, in some settings, to withstand wash-down with jets of water.

Today, a typical robotics vision system has a resolution of 1024 × 768 pixels, or somewhat less than a megapixel. Most systems are monochrome. Compared with a color system, this approach cuts costs for the same resolution or increases resolution for the same cost.

Of course, some industrial applications demand color. For example, color may be required when the finish on a part is being checked. Likewise, color may be needed for sorting parts. In such situations, a color-capable camera and vision system must be used.

Systems that capture 3-D data are gaining acceptance, but they tend to be complex and expensive compared with 2-D vision technology, and their results can be noisier and require more processing to interpret. However, 3-D imaging is being adopted widely in consumer applications such as gaming, which suggests it could soon find a home in industrial settings as well.

Software is much easier to use and more capable than before. For instance, specialized software can make the common task of inspection easy for even nonprogrammers to tackle. In modern software, the setup for camera triggering and lighting control can be handled by moving a slider, for example. The software can include such tools as ID and text reading, measuring, counting, feature finding and color verifying, as well as bead and surface checks.

Another advance that has made vision easier is the deployment of EtherNet/IP (Ethernet Industrial Protocol). This makes connecting a new device as easy as plugging it in, letting the system assign it an address, and then navigating to that address to set parameters and capture data. The future promises even easier networking in the form of wireless connectivity, although this will happen only once the factory floor fully embraces Wi-Fi and other wireless technologies.

Automation applications

One result of the advances in robotics and vision is that pixels in a vision system can now be easily tied to a spatial coordinate. This allows users to pinpoint the locations of objects or features, enabling dimensional measurements or other tasks using an image alone.

One way to accomplish calibration of pixels to points in space is to use a robot to place a target at a location and then use a camera to image the target. Repeating this at a number of known locations leads to a linkage of pixels to real-world coordinates. Until the camera is moved or other system changes are made, the location of any object in the field of view can be determined with a high degree of precision.
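As a rough sketch, the calibration described above can be expressed as a least-squares fit from pixel coordinates to world coordinates. The snippet below is a minimal illustration, not any vendor's API; the function names are made up, and it assumes a flat workspace so that an affine map suffices:

```python
import numpy as np

def fit_pixel_to_world(pixels, world):
    """Least-squares affine map from pixel (u, v) to world (x, y).

    pixels, world: (N, 2) arrays of corresponding points, N >= 3,
    gathered by having the robot place a target at known locations.
    Returns a 2x3 matrix A such that [x, y] ~= A @ [u, v, 1].
    """
    pixels = np.asarray(pixels, dtype=float)
    world = np.asarray(world, dtype=float)
    # Build the design matrix [u, v, 1] for each observation.
    ones = np.ones((pixels.shape[0], 1))
    design = np.hstack([pixels, ones])           # (N, 3)
    # Solve design @ A.T = world in the least-squares sense.
    A_T, *_ = np.linalg.lstsq(design, world, rcond=None)
    return A_T.T                                 # (2, 3)

def pixel_to_world(A, u, v):
    """Map one pixel location to world coordinates."""
    return A @ np.array([u, v, 1.0])

# Example: a pure scale of 0.5 mm per pixel plus an offset.
pix = [(0, 0), (100, 0), (0, 100), (100, 100)]
wld = [(10.0, 20.0), (60.0, 20.0), (10.0, 70.0), (60.0, 70.0)]
A = fit_pixel_to_world(pix, wld)
print(pixel_to_world(A, 50, 50))  # ≈ [35. 45.]
```

In practice, lens distortion and perspective make a plain affine map inadequate over large fields of view, so production calibrations use more elaborate camera models; this only conveys the idea.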

As this example illustrates, the combination of vision and robotics can yield a system with capabilities beyond those of either of the two technologies alone. This can also be seen in the two broad areas of automation applications of vision-equipped robots: guidance and quality assurance.

Robotics for guidance

An example of a guidance application would be picking up parts from one area and placing them in another during an assembly process. Before the advent of modern vision technology, this typically would have been handled with vibratory bowl feeders, fixtures or other mechanical means of orienting the incoming parts as needed. With vision, a camera can measure where a part is and determine its orientation. This information can then guide a robot arm to the appropriate part, which the arm picks up and places where needed.

This approach assumes that unique features on the incoming part allow the orientation – and, if necessary, the type – of the part to be determined. Experience has shown this to be true in most cases. Another assumption is that ambient light does not interfere with this identification. Sometimes, that is not the case on plant floors – where, for example, nearby welding operations can lead to intense and intermittent light. Passing parts over a brightly backlit transparent conveyor section can eliminate this problem, resulting in better and more consistent part identification.
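To make the guidance idea concrete, here is a hedged sketch of how a pick pose might be derived once two distinctive features on a part have been located in world coordinates. The function name and inputs are hypothetical, not taken from any specific vision package:

```python
import math

def pick_pose(feature_a, feature_b, grip_offset=(0.0, 0.0)):
    """Pick pose (x, y, theta_deg) from two detected part features.

    feature_a, feature_b: (x, y) world coordinates of two unique
    features on the incoming part (illustrative inputs).
    grip_offset: gripper offset from the feature midpoint, expressed
    in the part's own frame.
    """
    ax, ay = feature_a
    bx, by = feature_b
    # Part orientation: angle of the line joining the two features.
    theta = math.atan2(by - ay, bx - ax)
    # Midpoint between the features as the nominal grip point...
    mx, my = (ax + bx) / 2.0, (ay + by) / 2.0
    # ...shifted by the gripper offset, rotated into the world frame.
    ox, oy = grip_offset
    gx = mx + ox * math.cos(theta) - oy * math.sin(theta)
    gy = my + ox * math.sin(theta) + oy * math.cos(theta)
    return gx, gy, math.degrees(theta)

print(pick_pose((0.0, 0.0), (10.0, 10.0)))  # (5.0, 5.0, 45.0)
```

The resulting (x, y, angle) triple is what a controller would hand to the robot as a move target; real systems add a Z height and sanity checks on the detection.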

Robotics for quality assurance

A second broad category of automation applications is quality assurance. Completed products can be checked to ensure that labels are correctly affixed, that bar codes are present, or that enclosures are completely sealed. For these applications, a multiaxis robot arm can pick up a part, present it to a camera and then rotate it. This approach enables a single camera to perform an inspection that otherwise would require many cameras. It also means that hidden surfaces, such as the section of a part pressed down against a support, can be inspected.

Presenting a part in this way ensures uniform lighting, an important requirement for machine vision. A final benefit to this approach is that it requires only one pixel-coordinate calibration, which means that dimensional measurements can be made without going through an overlay process to align different cameras’ pixel coordinates.

Vision and robotics limitations

Vision-enabled robots are not, of course, the solution to all automation problems. A situation demanding a wide field of view at high resolution is one in which they would not work well. Another is where very large objects are involved. In both cases, the constraints of physics limit the applicability of current technology.

An example of the first case arises when pick-and-place must be done over a large area measuring more than 3 ft on a side. Out of this meter-on-a-side space, parts must be located with millimeter precision. No matter how high the resolution of the camera, barrel distortion from the lens will mean that errors at the edge of the field of view will be substantial.
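A quick back-of-the-envelope calculation shows why a single area camera struggles here, using the typical resolution quoted earlier (the numbers are illustrative):

```python
# Rough feasibility check: spatial resolution of one area camera
# spread across a wide pick-and-place field.
field_mm = 1000.0    # ~1 m working area on a side
sensor_px = 1024     # typical monochrome robotics camera resolution
mm_per_px = field_mm / sensor_px
print(f"{mm_per_px:.2f} mm per pixel")  # ≈ 0.98 mm per pixel
```

Even before lens distortion is considered, the camera is already at roughly one millimeter per pixel, leaving no margin for the millimeter-level precision the task demands.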

A solution might be to use a line-scan camera, which is designed to capture single-pixel-high swaths across a large field of view at high resolution. Line-scan cameras are sensitive to depth-of-field changes and so may be suitable only for certain situations.

The problem of achieving high resolution over a wide area can also be resolved by mounting a camera on a robot arm and sweeping it across the region. The same approach can be used when large parts such as an aircraft fuselage must be inspected. The trade-off here is time for resolution: if the time allotted for an inspection is too short, this approach may not work.


For those considering vision-enabled robotics for automation applications, a few guidelines should be kept in mind: on the vision side, flexibility is important; on the robotics side, the key is knowing the application's needs and environment.

For vision systems, flexibility means having communication options. Ethernet connectivity is virtually mandatory today, but other methods may also be required for legacy or other support. Another area requiring flexibility is the vision system’s ability to handle multiple cameras, and potentially cameras of varying resolutions. Such adaptability is important because the scope and requirements of automation projects can change between the initial concept and the final deployment. It is best not to be limited in terms of available equipment, tools or software routines.

On the robotics side, the required parameters must be well understood. If six degrees of freedom are not needed, for example, system cost and complexity can be reduced. Precision and speed of motion are other specifications a robot must meet. So, too, is the expected operational lifetime, as that sets a floor for the required mean time between failures.

Mean time between failures can be adversely affected by the industrial environment. Food processing, for instance, often uses water jets for wash-down, sometimes with caustic chemicals. Having a robot or an enclosure rated for the proper ingress protection, such as IP67 in the case of food processing, is important.

Robotic assembly and inspection in action

Istech built a custom assembly machine for a medical device customer to apply lids to sheets of plastic substrates. Each sheet has six silk-screened substrates, or coupons, on it. To apply the lids reliably and accurately, and to meet production goals, a series of features on each lid must be aligned with each corresponding coupon.

Istech uses advanced computer and robotic technologies to provide turnkey custom automation solutions for a variety of industries. Photo courtesy of Teledyne Dalsa and Istech.

In the system Istech built, four high-resolution GigE cameras are connected to a Teledyne Dalsa GEVA vision system. After a sheet is loaded onto a moving vacuum table, the sheet cameras identify and locate the six coupons. The lid cameras identify and locate the corresponding features on each lid. Teledyne Dalsa’s Sherlock vision-system software then performs calculations and provides X, Y and rotational correction values, which are transferred to an Epson robot that positions the lid on each coupon accordingly. Heat sealers then attach each lid. Once all six coupons are complete, the sheet is offloaded to a stack for further processing.
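The correction step can be illustrated with a simple sketch. This is not the actual Sherlock code; it only shows the kind of X, Y and rotational offset calculation involved, assuming both poses are reported in a shared frame:

```python
def alignment_correction(coupon_pose, lid_pose):
    """X, Y and rotational correction to bring a lid onto a coupon.

    Each pose is (x, y, theta_deg) in a common world frame, as a
    vision system might report after locating features on the sheet
    and on the lid. The returned deltas, applied to the lid by the
    robot, align it with the coupon.
    """
    cx, cy, ct = coupon_pose
    lx, ly, lt = lid_pose
    return cx - lx, cy - ly, ct - lt

# Hypothetical measurements in millimeters and degrees.
dx, dy, dtheta = alignment_correction((120.0, 45.0, 2.5),
                                      (118.6, 45.9, 1.1))
print(round(dx, 2), round(dy, 2), round(dtheta, 2))
```

In the real cell, these deltas would be transferred to the Epson robot as a move offset before the heat sealers attach the lid.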

GigE cameras and vision software allow a robot to position the lid on each coupon accordingly. Photo courtesy of Teledyne Dalsa and Istech.

Meet the authors

Greg Raciti is the engineering manager at Faber Industrial Technologies in Clifton, N.J. Steve Zhu is the director of sales for Asia at Teledyne Dalsa in Montreal.
