

For Image Processing, More Power and More Challenges

Hank Hogan, Contributing Editor, hank.hogan@photonics.com

Increased resolution, 3-D measurement and ease of use are placing ever-greater demands on vision software.

In some ways, the machine vision market is no different from its consumer counterpart. Take, for instance, the question of megapixels. The big growth now is in 2- to 5-MP sensors, with 20+ MP sensors serving niche markets, according to Rick Roszkowski, senior director of marketing for vision products at Cognex.

“All of these pixels are putting pressure on the vision algorithms, in terms of speed and memory requirements,” Roszkowski said.


Using image processing, a machine vision system verifies cap color, cap closure and label placement on over-the-counter drug containers. Photo courtesy of Cognex.


Fortunately, image processing software is running on more powerful systems than ever before, thereby enabling faster and more accurate determination of a feature’s shape and size. But 3-D measurement capability, increased ease of use and other advances are forcing software makers to come up with new solutions.

Handling more megapixels

Users like more megapixels, according to Roszkowski – even if they aren’t really necessary. As a general rule, the goal is to have 10 pixels per unit of desired accuracy, with a greater ratio bringing little added benefit, he indicated. An example of diminishing returns comes from optical character recognition: Once the characters get beyond 40 pixels tall, there is no real increase in useful information.
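That rule of thumb translates directly into a minimum sensor resolution for a given field of view. A minimal sketch (the helper name and the example figures are hypothetical; the 10-pixel ratio is the rule cited above):

```python
def min_sensor_pixels(fov_mm, accuracy_mm, pixels_per_unit=10):
    """Rule-of-thumb minimum pixel count along one axis: enough
    pixels that each unit of desired accuracy spans ~10 pixels.
    A higher ratio brings diminishing returns."""
    return int(round(fov_mm / accuracy_mm * pixels_per_unit))

# Example: 100 mm field of view, 0.5 mm desired accuracy
# -> 100 / 0.5 * 10 = 2000 pixels per axis
print(min_sensor_pixels(100, 0.5))  # 2000
```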


Machine vision-based optical character recognition verifies data and lot code per FDA regulations. Increasing the number of pixels used for character recognition produces diminishing returns in accuracy. Photo courtesy of Cognex.


What does increase is the data processing burden. Since pixel counts typically rise in both X and Y, the load grows quadratically. Add to that the computational burden of auto-learning features and an easier-to-use interface, and it’s clear that image processing software faces a tougher and tougher road. What’s more, for machine vision tasks, the software must return an image-derived determination within a set amount of time.
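Because pixel counts rise along both axes, doubling the linear resolution quadruples the data per frame. A quick sketch with illustrative sensor sizes:

```python
def frame_pixels(width, height):
    """Raw pixel count the software must process per frame."""
    return width * height

print(frame_pixels(1224, 1024))  # 1253376 (~1.25 MP)
print(frame_pixels(2448, 2048))  # 5013504 (~5 MP): 4x the data for the same scene
```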

Cognex of Natick, Mass., a maker of machine vision systems, attacks this problem in part by providing PC vision systems and smart cameras. The former offer tremendous processor performance and loads of memory. The latter are smaller and technologically more stable. The two can work together, with a PC vision system used for software development and training. Once ready, the software and associated database can be downloaded to and run on a smart camera.

Looking forward, Roszkowski sees two trends. One is the tighter coupling of vision systems and image processing with robotics. This involves calibration between the two, with a robot arm, for instance, being instructed to move a certain distance and the captured image being used to convert pixels into robotic units of movement. Linkages like these between robotics and vision are increasingly necessary.

“It used to be hard tooling and fixturing to get the part oriented so the robot could pick it. Now it’s soft tooling, and parts are coming by, and vision is finding it and telling the robot the orientation, and the robot can then go pick it,” Roszkowski said.
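The pixel-to-robot-units conversion Roszkowski describes can be sketched as a one-move scale calibration: command the robot a known distance, measure how far a tracked feature shifts in the image, and derive millimeters per pixel. A simplified, hypothetical scheme (real systems calibrate a full 2-D transform, including rotation and lens distortion):

```python
import math

def calibrate_scale(robot_move_mm, px_before, px_after):
    """Estimate mm-per-pixel from one commanded robot move and the
    resulting pixel displacement of a tracked feature.
    px_before/px_after are (x, y) pixel coordinates."""
    dx = px_after[0] - px_before[0]
    dy = px_after[1] - px_before[1]
    pixel_shift = math.hypot(dx, dy)
    return robot_move_mm / pixel_shift

def pixels_to_mm(px_offset, mm_per_px):
    """Convert a measured pixel offset into robot units of movement."""
    return (px_offset[0] * mm_per_px, px_offset[1] * mm_per_px)

# A 50 mm commanded move shifts the feature 250 px -> 0.2 mm per pixel
scale = calibrate_scale(50.0, (100.0, 200.0), (350.0, 200.0))
print(scale)                                # 0.2
print(pixels_to_mm((40.0, -15.0), scale))   # (8.0, -3.0)
```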


3-D measurement and associated image processing enable confirmation of height in selected areas. Photo courtesy of Keyence Corp.


It’s a 3-D world

The second trend cited by Roszkowski involves 3-D measurements. A variety of ways exist to do this: using a laser beam to create a point cloud, bathing a surface in structured illumination, or using two cameras to create a stereo image. No matter how it’s done, the result is an increase in image processing load.
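Of the three approaches, stereo recovers depth from the disparity between matched features in the two camera views. A minimal sketch of the standard triangulation formula (the example numbers are illustrative):

```python
def stereo_depth(focal_px, baseline_mm, disparity_px):
    """Classic two-camera depth: Z = f * B / d, where f is the focal
    length in pixels, B the camera baseline, and d the pixel disparity
    between matched features. Smaller disparity means greater depth."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# 1200 px focal length, 60 mm baseline, 90 px disparity
print(stereo_depth(1200, 60, 90))  # 800.0 (mm)
```

Running this formula for every matched pixel pair is what turns a stereo frame into a dense depth map, and is a large part of the added processing load.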

Elmwood Park, N.J.-based Keyence Corp. of America has tackled the problem of growing demand for image processing through a divide-and-conquer approach, according to Douglas Kurzynski, project manager for machine vision technology. Thus, Keyence opted to go with cameras and accompanying controllers, with the latter running the image processing software. This method allows the company to throw such advances as multicore systems at the processing problem.

The company’s software, meanwhile, is growing more powerful. It offers, for example, tools for scratch-defect extraction, stain detection, optical character recognition and dimensional measurement.

The software is growing simpler at the same time, Kurzynski added. “A lot of the more advanced settings are hidden behind the main menu so novice users don’t get confused by many different settings.”


Image processing tool examples include scratch-defect extraction filters, which allow the vision system to bring out only linear defects on difficult target surfaces. In (a), the linear stain on this metal component cannot be detected because of the minute rough edges on the background, but in (b), the linear stain is extracted because the software ignores background noise. Photo courtesy of Keyence Corp.
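A toy version of linear-defect extraction can be sketched by keeping only pixels that belong to elongated bright runs, which favors scratch-like features while suppressing isolated background noise. This is an illustrative filter, not Keyence’s algorithm:

```python
def extract_linear_defects(img, run_len=5, thresh=50):
    """Keep a pixel only if it sits in a horizontal run of bright
    pixels at least run_len long; isolated specks are suppressed.
    img is a list of rows of grayscale values (0-255)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w - run_len + 1):
            window = img[y][x:x + run_len]
            if min(window) > thresh:          # the whole run is bright
                for i in range(run_len):
                    out[y][x + i] = 255
    return out

noise = [[0] * 10 for _ in range(3)]
noise[1][4] = 200                    # isolated bright speck: suppressed
scratch = [[0] * 10 for _ in range(3)]
scratch[1][2:8] = [200] * 6          # 6-px bright line: extracted
print(any(v for row in extract_linear_defects(noise) for v in row))    # False
print(any(v for row in extract_linear_defects(scratch) for v in row))  # True
```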


The company’s image processing solution runs only on its own equipment, which spans the range from 0.3- to 21-MP sensors. Some versions of the hardware can create 3-D data, and the software handles this.

Simpler software

Originally known for imaging-related hardware, Matrox Imaging of Dorval, Quebec, has also been supplying image processing software for years. What the company observed some years ago with its Matrox Imaging Library is that good programmers are hard to find. And the problem was growing more acute, with image processing expertise becoming increasingly scarce, noted Pierantonio Boriero, product line manager.

The only way the company could expand the market for its software was by making it simpler to use, he added. An example of how Matrox intends to achieve this can be found in the company’s flowchart-based, hardware-independent machine vision software, Design Assistant 4.


Image processing software can monitor or control projects running on different computers or smart cameras, presenting results via a human-machine interface, or HMI. Photo courtesy of Matrox Imaging.


“You’re presented with a canvas where you construct a flowchart for the program logic, and then an operator-interface Web page that links back to the flowchart,” Boriero said.

When done, the result is deployed to the host system that will run the package. Originally, the software’s output targeted only the company’s smart cameras. The latest version can target any PC with a GigE Vision camera (and, soon, a USB 3.0 camera) connected to it.

Such software, in addition to being easier to use, also illustrates the challenges that suppliers face. Historically, image processing software for machine vision applications handled only vision-related chores. Thus, the software might include algorithms to find an edge or measure a dimension. But that was all it did.

Today, the software must do all that while also supporting the human-machine interface (HMI) through which people monitor and control machines. The software must also support industrial communication protocols used by machinery on a plant floor, such as EtherNet/IP. Importantly, the software must interface with robots as vision-enabled systems become commonplace.


Machine vision allows robots on automated packaging lines to deal with parts in varying orientations and placements.


Once, industrial-systems programmers were burdened with knitting all of these different elements together. Now these chores are being handled in the image processing software itself, Boriero said.

At the same time, the raw information going into image processing software is being improved through basic mathematically derived adjustments. For instance, Allied Vision Technologies of Stadtroda, Germany, does not make smart cameras. However, the company’s products do some basic manipulation of the data to sharpen things up before passing it along to image processing software, said Torsten Freiling, product manager.

“You can correct the image through a lookup table to increase the contrast,” Freiling said.
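A contrast-stretching lookup table of the kind Freiling describes can be sketched as follows. This is a generic LUT, not Allied Vision’s implementation; the `low`/`high` breakpoints are illustrative:

```python
def contrast_lut(low, high):
    """Build a 256-entry lookup table that linearly stretches the
    gray range [low, high] to [0, 255], clipping values outside it."""
    lut = []
    for v in range(256):
        if v <= low:
            lut.append(0)
        elif v >= high:
            lut.append(255)
        else:
            lut.append(round((v - low) * 255 / (high - low)))
    return lut

def apply_lut(pixels, lut):
    """Remap every pixel through the table: one array lookup each,
    cheap enough to run in the camera before the image is passed on."""
    return [lut[p] for p in pixels]

lut = contrast_lut(50, 200)
print(apply_lut([30, 50, 125, 200, 240], lut))  # [0, 0, 128, 255, 255]
```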

Dealing with perception

National Instruments of Austin, Texas, provides image processing software with a graphical interface and has been investing in machine vision technology for nearly two decades, according to Carlton Heard, vision product manager. The company’s current offerings include image acquisition and processing functions that can be used in medical research, high-volume production testing, and embedded and robotic applications.

Some advances in industrial machine vision arise from the consumer space.

As an example, Heard cited 3-D imaging, which has application in video gaming and also enables robots to put depth perception to work when doing bin picking on the factory floor.

Besides more pixels and 3-D imaging, another hardware trend is toward higher-bandwidth camera interfaces. The previous high-speed interface, Camera Link, offered a maximum rate of 680 MB per second.

“With new interfaces, such as CoaXPress, the data throughput can be up to 2.5 GB per second,” said Donal Waide, director of sales at frame grabber maker BitFlow of Woburn, Mass.
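The practical effect of that bandwidth jump can be sketched as a back-of-the-envelope frame-rate ceiling (protocol overhead ignored; the 5-MP resolution is illustrative):

```python
def max_fps(width, height, bytes_per_px, link_bytes_per_s):
    """Upper bound on the frame rate a camera link can sustain,
    ignoring protocol overhead."""
    frame_bytes = width * height * bytes_per_px
    return link_bytes_per_s / frame_bytes

# 5 MP (2448 x 2048), 8-bit monochrome:
cl = max_fps(2448, 2048, 1, 680e6)    # Camera Link, 680 MB/s
cxp = max_fps(2448, 2048, 1, 2.5e9)   # CoaXPress, 2.5 GB/s
print(round(cl), round(cxp))  # 136 499
```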

A challenge arises from the growth in the number of pixels and the increase in camera-interface bandwidth. Together, they drive up computational demands faster than general-purpose processing power is scaling, according to National Instruments’ Heard.

He added that a potential solution is the use of field-programmable gate arrays (FPGAs). They’re inherently parallel in operation, which is a benefit, as the same mathematical manipulation is often repeated over and over on each part of an image. The technology also has low latency, making it suitable for time-sensitive machine vision applications. Finally, the technology offers good processing performance in terms of energy consumption.

However, FPGAs historically have required knowledge of low-level, hardware-specific programming languages, a hurdle that has been eliminated with the arrival of high-level programming languages. Now image processing can be split up, with a processor handling the part of the task for which it is best suited. If appropriate, an FPGA can do the rest in parallel, with this division potentially being done automatically via software.
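The kind of operation that maps well onto an FPGA is one where every pixel’s result is independent of its neighbors, so many results can be computed per clock. A minimal sketch of such a per-pixel operation (a plain threshold, written in Python purely for illustration):

```python
def threshold_pixel(p, t=128):
    """Per-pixel operation with no dependence on neighboring pixels --
    exactly the kind of repeated, independent math that maps onto an
    FPGA's parallel fabric."""
    return 255 if p >= t else 0

def threshold_image(pixels, t=128):
    # On a CPU this loop runs serially; an FPGA can evaluate many
    # pixels per clock because each result is independent.
    return [threshold_pixel(p, t) for p in pixels]

print(threshold_image([10, 128, 200]))  # [0, 255, 255]
```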

Finally, the growing capabilities of image processing software can counter another challenge: users’ perceptions. These stem both from fictional depictions of what the technology can do and from people’s life experiences. Humans, after all, do their own image processing all the time, which leads to certain expectations about what’s possible with image processing software.

“Many outside of the image processing market are under the assumption that if they can see ‘it’ with their eyes, then a camera or computer should be able to as well,” Heard said.


©2024 Photonics Media