For Vision Systems, Lighting and Other Advances Up Capabilities and Cut Costs

Photonics Spectra
Jun 2016
Better sensors, more powerful processors and sophisticated algorithms are opening up new mass market applications.

HANK HOGAN, CONTRIBUTING EDITOR, hank.hogan@photonics.com

For vision systems, lighting is critical. Now, the wide availability of LEDs offers new choices, such as being able to create complex illumination patterns. There also are new and improved sensors, processors and algorithms. Combined, these advances promise to make vision systems smaller yet more capable and affordable.

That is what Tom Brennan sees happening. He’s president of Denver-based Artemis Vision, a machine vision solutions integrator.

For instance, according to Brennan, a beneficial drop in the cost of LEDs has been accompanied by an equally helpful widening of availability. Previously, when an integrator like Artemis Vision needed nonstandard lighting, it was difficult to even cobble together a proposed solution because the volumes involved were so small. Consequently, projects would languish. That situation has changed.

Recent advances in sensors benefit machine vision applications. Cameras can now see in greater detail than before, which means more precision and better control for applications such as vision-guided motion, driver assistance and quality inspection. Courtesy of National Instruments.

“You can now just go on various websites and order raw LEDs and make your own lights in a way that’s much easier than it was years ago. So it really opens up the avenue to design your own lighting schemes as needed,” Brennan said.

With the advent of easily obtainable individual LEDs, Artemis Vision can now purchase as few as four and produce an evaluation solution. If that's successful, the lighting can be mounted in a housing and then, perhaps, turned into a standard part that the company can produce.

“It helps a lot in terms of cost and lead time,” Brennan said of the new lighting landscape.

The lighting technology advances have been complemented by improvements in the capabilities of sensors, processors and algorithms. These vision systems components have also seen price drops that have made it possible for smaller machine vision solutions suppliers to keep systems on hand strictly for testing and evaluation.

Vision innovations based on lower-cost components make it possible to tackle an inspection station (a), a pit crew helmet with mounted cameras (b, c), or other lower-volume applications. Courtesy of Artemis Vision.

When asked about vision innovations, Eric Jalufka, product manager for vision hardware and software at National Instruments Co. of Austin, Texas, began by discussing sensor progress, such as increases in available resolution. Whereas a few years ago a 5-MP resolution would be state of the art, today sensors are moving into the 20+ MP range for area scan cameras. More pixels and higher resolution make it easier to detect fine details.

Further, sensors offering higher dynamic range with very low noise are appearing. Those sensors can also handle higher frame rate imaging, meaning that they can capture events that take place in a shorter time period than was possible before.
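As a rough illustration of what "higher dynamic range with very low noise" means in numbers, sensor dynamic range is commonly quoted in decibels as the ratio of full-well capacity to read noise. The electron counts below are illustrative assumptions, not figures from the article:

```python
# Sensor dynamic range in decibels: 20 * log10(full-well capacity / read noise).
# The electron counts used here are illustrative assumptions.
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Dynamic range in dB for a sensor with the given electron counts."""
    return 20 * math.log10(full_well_e / read_noise_e)

# A hypothetical sensor with a 30,000 e- full well and 3 e- read noise:
print(round(dynamic_range_db(30_000, 3)))  # 80
```

Lowering the read noise from 3 e- to 1 e- on the same sensor would add roughly 9.5 dB, which is why low-noise designs are the key to wide dynamic range.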

“Traditionally, it was hard to find those qualities all in one sensor. But now we’re starting to see these nice benefits offered in a single sensor and it’s something that’s available to the machine vision market. It’s not just constrained to a high performance, scientific lab camera,” Jalufka said.

Grayscale images of parts (a) and (b) illustrate the difficulty of spotting parts. But with point cloud images (c, d), practical thanks to vision systems advances, identification is easy. Images courtesy of 2016 KINEMETRIX (www.kinemetrix.com).

This combination of characteristics is opening up new mass market applications. For instance, cars routinely travel from bright sunshine into a dark tunnel, or vice versa, in a fraction of a second, and they are used day and night. A driver assist solution must be able to deal with this. A high dynamic range sensor helps because it means the camera performs well when going from light to dark or back again.

More bits, more challenges

Such sensor innovations are not an unmitigated benefit. A higher dynamic range, a greater frame rate and higher resolution all mean that the sensor produces more bits in a given period of time. Those bits have to be transmitted. For that reason, Jalufka sees sensor technology driving the adoption of higher bandwidth communication standards, like USB 3.0.
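The pressure those extra bits put on the interface is easy to estimate: raw data rate is just resolution times bit depth times frame rate. The figures below are illustrative assumptions, not numbers from the article:

```python
# Back-of-the-envelope sensor data rate. The resolution, bit depth and
# frame rate chosen here are illustrative assumptions.

def data_rate_gbps(megapixels, bits_per_pixel, frames_per_sec):
    """Raw uncompressed data rate in gigabits per second."""
    return megapixels * 1e6 * bits_per_pixel * frames_per_sec / 1e9

# A 20-MP sensor at 10 bits/pixel and 15 frames/s:
rate = data_rate_gbps(20, 10, 15)
print(f"{rate:.1f} Gbit/s")  # 3.0 Gbit/s
```

A stream like that far exceeds the 0.48 Gbit/s of USB 2.0 but fits within the 5 Gbit/s signaling rate of USB 3.0, which is why rising sensor output drives adoption of the faster standard.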

The increased number of bits also puts a strain on processors, he added. That data has to be run through calculations and algorithms, and as the number of bits goes up that burden increases.

One way to address this problem is to use higher performance processors. Another approach, which is increasingly employed, is heterogeneous processing. Here, a traditional processor handles part of the chore and a graphics processing unit (GPU) or a field programmable gate array (FPGA) handles the rest. The key is knowing which algorithms (the mathematical methods that turn image data into numbers and actionable information) should run on which calculation engine.

Software and algorithm innovations make vision systems easier to use and more powerful. Courtesy of Cognex.

“We can put the algorithms that are best suited for the FPGA on the FPGA and the ones that are better suited to the CPU [central processing unit] on the CPU. Those two elements can work together to increase overall throughput so you can process that data faster. You can make decisions faster and increase your throughput,” Jalufka said.

Some tasks, like thresholding for particle analysis, work well on an FPGA, he said. On the other hand, pattern matching and more advanced algorithms are more efficiently dealt with by a processor.
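Thresholding followed by particle (blob) counting is the kind of simple, pixel-wise work that maps well to an FPGA. A minimal CPU sketch of the same two steps is below; the toy grayscale image and flood-fill labeling are illustrative assumptions, not production machine vision code:

```python
# Minimal sketch: binarize an image, then count 4-connected particles.
# The toy image and the flood-fill labeling are illustrative assumptions.
from collections import deque

def threshold(img, level):
    """Binarize: pixels at or above `level` become 1 (particle), else 0."""
    return [[1 if p >= level else 0 for p in row] for row in img]

def count_particles(binary):
    """Count 4-connected blobs of 1s using breadth-first flood fill."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                count += 1                      # found a new particle
                q = deque([(y, x)])
                seen[y][x] = True
                while q:                        # flood-fill its pixels
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count

# A tiny 4x5 grayscale "image" containing two bright blobs:
image = [
    [10, 200, 210,  12,  15],
    [11, 220,  14,  13, 180],
    [ 9,  10,  12, 190, 200],
    [ 8,  11,  10,  12,  14],
]
print(count_particles(threshold(image, 128)))  # 2
```

The per-pixel comparison in `threshold` is exactly the sort of operation an FPGA can pipeline one pixel per clock, while the irregular, data-dependent flood fill is more naturally handled on the CPU, mirroring the split Jalufka describes.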

The higher sensor capabilities, greater processing power and improved algorithms increase imaging solution capabilities, according to Bob Voigt, chief technical officer at Resolution Technology Inc. of Dublin, Ohio. The company supplies machine vision components, systems and custom development for manufacturing applications.

For instance, inspection can now be done in three dimensions while parts move by at full speed. Pick-and-place systems used to require that parts be precisely positioned and oriented for high-accuracy recognition; now, parts can be randomly oriented.

“Systems can now monitor ‘made-to-order’ production lines and switch inspection routines for each item coming down a line,” Voigt said, in describing another example.

In manufacturing, almost any process that requires visual inspection by an operator or technician can now be done by an automated vision system, he said. Outside of manufacturing, these improvements mean that imaging solutions are now showing up in consumer products ranging from cars to bike helmets to ovens.

Powerful stand-alone systems emerge

Vision systems are benefiting from developments in consumer mobile technology, said Robb Robles, principal product marketing manager at vision systems solution supplier Cognex Corp. of Natick, Mass. There is, for instance, the ongoing increase in processing power. One consequence is a change in the configuration mix of vision solutions.

These can be divided into PC-based solutions and stand-alone systems. The latter combine sensor and processor along with other components to output an answer, such as whether a part is present or not. Examples include smart cameras, embedded vision systems and the like.

“We’re going to do things today with a stand-alone vision system that previously we had to use a PC for,” Robles said. “Stand-alone vision systems with embedded processors have become fast enough for the majority of applications.”

However, a PC-based system might be needed for large images, such as those from a 21-MP camera, he added. He noted that the greater power of a PC might also be needed in complex applications where a large number of cameras are used, but that a PC-based system often is more expensive to develop and deploy than a smart camera vision solution.

Another shift has taken place in the sensor space. At one time, CCD was the only option. But improvements in CMOS sensor performance mean that most applications have switched to this technology, in part because it uses less power and generates less heat.

Vision systems have also benefited from riding along with another consumer mobile technology: cameras. Kerstin Prechel, product manager at Ahrensburg, Germany-based vision systems maker Basler AG, noted that the sensors in these devices have gotten significantly higher in resolution while staying the same size or shrinking as phone makers squeeze more pixels into their devices.

For machine vision systems, the trend, which is driven by the introduction of smaller pixels, has made higher resolution possible. There also have been parallel developments in lenses, meaning that even these small pixels can now be resolved.
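Smaller pixels demand sharper optics: to resolve a pixel pitch p, a lens must deliver roughly 1/(2p) line pairs per millimeter, the Nyquist limit at the sensor. A quick sketch, using pixel pitches chosen as illustrative assumptions:

```python
# Lens resolution required to resolve a given pixel pitch (Nyquist limit).
# The pixel pitches below are illustrative assumptions.

def nyquist_lp_per_mm(pixel_pitch_um):
    """Line pairs per mm a lens must resolve for a pixel pitch in microns."""
    pitch_mm = pixel_pitch_um / 1000
    return 1 / (2 * pitch_mm)

for pitch in (5.5, 3.45, 2.2):
    print(f"{pitch} um pixels -> {nyquist_lp_per_mm(pitch):.0f} lp/mm")
```

Halving the pixel pitch doubles the resolving power the lens must deliver, which is why lens development had to keep pace before small-pixel sensors became useful in machine vision.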

“This led to good usability of C-mount lenses in applications that previously needed high-quality cameras and lenses that are much more expensive,” Prechel said. “A next interesting trend might be the need of the automobile industry for cameras that will need different sensor abilities.”

Other trends she sees involve the arrival of 3D as well as very inexpensive vision systems. Both will expand the possible applications. Basler is also noticing a trend toward solutions that are just good enough, as price alone becomes an increasingly important factor.

Advances make 3D images from vision systems practical. Courtesy of Basler AG.



There are, of course, vision system aspects that still need improvement, including lenses and optics. Artemis Vision’s Brennan said they tend to be a weak spot of a system, in part because over time in an industrial environment they have a tendency to work their way out of alignment. Resolution Technology’s Voigt noted that lighting optics are often overlooked and need to be better designed.

Speaking of lighting, no matter the problems solved or the technology advancements made, one thing will still be true. Lighting will remain a key part, perhaps as much as 80 percent, of a vision solution.

Lance Riek is an engineer and co-founder of Sensory Labs. A machine vision integrator based in Bozeman, Mont., the company develops both traditional machine vision applications and aerial imaging systems for manned aircraft and drones. For its aerial projects, Sensory Labs often adapts machine vision cameras to the job, and so a background in integrating cameras is helpful, according to Riek.

Also important, no matter the task, is the proper illumination. In discussing this, Riek said, “Lighting is critical for all machine vision applications — if the lighting doesn’t reveal it, the camera can’t see it.”

