Getting the Picture on Imaging Software

As software takes on a prominent role in machine vision, suppliers are keeping up with changing technology and offering easy-to-use programs with faster processing times.

Hank Hogan, Contributing Editor

To hear Kyle Voosen tell it, there are really only two places to be when talking about industrial and machine vision applications. Machine vision products manager at National Instruments Corp. in Austin, Texas, Voosen noted that smart cameras and standard camera buses have done more than bring vision to the masses. The new technologies, along with improved computer performance, have shifted attention from frame grabbers to other aspects of vision. “You’re really finding that the focus is now either on cameras or software,” he said.

For Voosen, that concentration is all right because his company considers software at the core of what it does. However, National Instruments isn’t the only supplier of machine vision software. Others include German companies MVTec Software GmbH of Munich and Stemmer Imaging GmbH of Puchheim and Canadian firms Matrox Imaging of Dorval, Quebec, and Dalsa Coreco of Montreal.

Of these vendors, only MVTec does not supply any hardware. However, all of them say that their software will run with a variety of cameras and other equipment, and all are responding to the same general trends. These include the demand for easier-to-use software, looming changes in the basic programming environment and increasing processing power.

Easy does it

As for the first trend, vision applications years ago were the exclusive domain of specialists. That has changed. An increasing number of end users have expertise in a particular industry or application but aren’t programmers versed in the intricacies of various software languages. They also aren’t vision experts. These consumers of machine vision technology demand software that is easier to use, but they are not willing to sacrifice functionality.

The blob tool supports various levels of input (gray-scale, binarized, labeled images) and allows the computation of basic and complex blob features.
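
To make those features concrete, here is a minimal blob-analysis sketch using the open-source OpenCV and NumPy libraries rather than any vendor's toolkit; the image file name is a placeholder.

```python
# Sketch of basic blob analysis with OpenCV/NumPy (not a vendor toolkit).
# "parts.png" is a placeholder image of bright parts on a dark background.
import cv2
import numpy as np

gray = cv2.imread("parts.png", cv2.IMREAD_GRAYSCALE)

# Binarize the gray-scale input, then label connected regions (blobs).
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
num, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

for i in range(1, num):  # label 0 is the background
    area = stats[i, cv2.CC_STAT_AREA]
    cx, cy = centroids[i]
    w, h = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]

    # A more complex feature: the best-fit ellipse of the blob's outer contour.
    mask = (labels == i).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    ellipse = cv2.fitEllipse(contours[0]) if len(contours[0]) >= 5 else None

    print(f"blob {i}: area={area}, centroid=({cx:.1f},{cy:.1f}), bbox={w}x{h}, ellipse={ellipse}")
```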

National Instruments touts a solution based on graphical programming, where users select, place and connect icons. This is translated behind the scenes into commands to run hardware, analyze results and perform other tasks. In the case of vision, the company’s interactive tools allow users to select from a menu of algorithms to accomplish such tasks as pattern matching, blob analysis, reading text and deciphering bar codes. These algorithms form part of the solution, along with such things as the user interface, camera control and communication of results. The problem has been that such a user-friendly approach sometimes comes up short, resulting in an 85 to 95 percent solution to a 100 percent problem.

At that point, Voosen said, end users have traditionally faced two choices: “If you reach the end of a configuration environment, you really have to either start from scratch or convert your configuration into a programming language.”

A classification tool in Vision Builder for Automated Inspection 2.6 allows users to identify, inspect and sort items. Future software based on neural networks and other techniques may allow classification based on examining all features, as opposed to having to do multiple sequential pattern matches.

National Instruments recently announced an extension of its graphical programming environment intended to get around this issue. Working with its interactive vision software, NI Vision Builder, end users can implement their own code to run custom algorithms, supply advanced analysis functions or provide the other missing pieces of a solution. Such custom code also can be provided by hardware manufacturers, thereby allowing camera vendors, for example, to optimize performance or enhance the usability of their product. With the ability to include custom code, Voosen noted, a simple-to-implement configuration tool can do more than just a pass/fail inspection.

Learning to speak the language

Other companies also are attempting to make their software easier to use. Matrox Imaging, for example, plans to release a tool this year that will address the influx of nonprogrammers and nonvision experts into the ranks of its end users. This will mark a new direction for the company. Describing the as-yet-unreleased offering, Pierantonio Boriero, product line manager, said, “It’s not to replace the programming environment. It’s complementary and for a different class of customers.”

For several years, Matrox has been introducing image analysis and processing tools, such as geometric pattern recognition technology, an edge finder and the latest, a string reader that debuted in 2004. The string reader, which builds upon the company’s earlier geometric pattern recognition software, has been used to read license plates. Other popular tools, Boriero said, are those that read bar codes and data matrix symbols, both of which can identify parts and products.

One looming change that is affecting software vendors was created by Microsoft Corp.’s .Net framework. The Redmond, Wash.-based company advertises .Net as a way to connect information, people and applications through Web services. For developers, one of the advantages is that it is language-independent, so that solutions and libraries developed in one language can be used in another. The .Net framework has been welcomed by database and Web services developers.

The same level of acceptance has not happened in the machine vision community. Programmers in this area are a bit more conservative, particularly because the software has to make decisions in real time. However, the expectation is that there will be a slow move toward .Net. Consequently, all of the software providers either already do or are planning to support the new programming framework.

Not to be outdone in the ease-of-use department, Bruno Ménard, software team leader of the vision processors group at Dalsa Coreco, said that his company already has software that is easy to program. Experienced machine vision developers, such as system integrators and OEMs, are no longer the only targets for the company’s products. The new group of customers includes those who require help because they aren’t familiar with vision algorithms or applications.

Under the hood

“You have to shorten the learning curve and provide tools to minimize the effort of learning all those complicated concepts,” Ménard explained. He said that image analysis tools such as geometric pattern matching, optical character recognition and bar-code reading can be complex and that Dalsa Coreco’s intelligent tools free end users from having to tweak parameters to achieve results.

The bar-code decoding tool supports a variety of 1-D and 2-D bar codes, including Data Matrix ECC200 and QR codes.
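
Decoding of this sort can be sketched with open-source libraries; the example below uses OpenCV's built-in QR detector and the pyzbar package for common 1-D symbologies, and is not the vendor tool shown in the figure. The file name is a placeholder.

```python
# Sketch: decode a QR code with OpenCV and 1-D bar codes with pyzbar
# (both open source; "label.png" is a placeholder image name).
import cv2
from pyzbar import pyzbar

img = cv2.imread("label.png", cv2.IMREAD_GRAYSCALE)

# 2-D: OpenCV's built-in QR detector returns the decoded string (or "" on failure).
data, points, _ = cv2.QRCodeDetector().detectAndDecode(img)
if data:
    print("QR code:", data)

# 1-D symbologies such as Code 128 and EAN via the zbar backend.
for symbol in pyzbar.decode(img):
    print(symbol.type, symbol.data.decode("utf-8"))
```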

One reason for ease-of-use and other enhancements is the increasing power under the hood resulting from advances in hardware and software. Ten years ago, pattern matching could take hundreds of milliseconds; today it can be done in only a few. That speed allows systems to monitor rapidly moving assembly lines, while the software advances relieve programmers of details such as capture control and memory and display management.
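
As a rough illustration of that speed, a normalized cross-correlation pattern match over a modest frame runs in milliseconds on current hardware. The sketch below uses OpenCV and synthetic data, not any vendor's matcher.

```python
# Sketch: time a normalized cross-correlation pattern match (OpenCV, synthetic data).
import time
import cv2
import numpy as np

scene = np.random.randint(0, 256, (480, 640), dtype=np.uint8)   # stand-in camera frame
template = scene[100:164, 200:264].copy()                        # 64x64 pattern to find

start = time.perf_counter()
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, score, _, location = cv2.minMaxLoc(result)
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"best match {score:.3f} at {location} in {elapsed_ms:.1f} ms")
```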

Although the processing power of computers has increased, there can still be a need for add-in processing boards. The advent of the PCI Express bus has enabled communications at speeds of multigigabits per second between board and host. Vendors can now make vision processing boards that provide an extra kick, such as the acquisition of images at higher speeds. However, that ability also could be used in other ways. One possibility, noted Dalsa Coreco’s product manager, Inder Kohli, would be using processors on a board to provide extra computational capabilities for software running on the host.

A result of increased computer power is more robust software and applications. Software can now compensate for things such as changing lighting, camera defects and inhomogeneous backgrounds. In the case of Dalsa Coreco, the software is helping provide what the company calls trigger-to-image reliability. By snaring the messages that flow between software components, this programming approach can help diagnose out-of-the-ordinary situations. This could include a case where a camera acquired an image while the system was busy processing another frame and, as a result, the image was lost. Tracking down such a miss can be labor-intensive, but the process is made easier if the behind-the-scenes software messages are captured.
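
The idea behind that message capture can be sketched generically: log a timestamped event for each frame at every stage and flag frames that were triggered but never processed. The class and event names below are invented for illustration and are not Dalsa Coreco's API.

```python
# Illustrative sketch of trigger-to-image-style event logging; the class and
# stage names are invented for this example and are not a vendor API.
import time
from collections import defaultdict

class FrameLog:
    def __init__(self):
        self.events = defaultdict(list)   # frame id -> [(stage, timestamp), ...]

    def record(self, frame_id, stage):
        self.events[frame_id].append((stage, time.perf_counter()))

    def missed_frames(self):
        # Frames that were triggered but never reached the "processed" stage.
        return [fid for fid, evs in self.events.items()
                if not any(stage == "processed" for stage, _ in evs)]

log = FrameLog()
log.record(1, "triggered"); log.record(1, "captured"); log.record(1, "processed")
log.record(2, "triggered"); log.record(2, "captured")    # lost while the host was busy
print("not inspected:", log.missed_frames())              # -> [2]
```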

The camera configuration utility of the Sapera LT development library is extremely intuitive and easy to use.

“We log it; we time it; you can track it. Further down the line, you can actually say, ‘This object was really not inspected,’ and you can discard it or recycle it,” Kohli said.

Training by example

Stemmer Imaging has put the increased processing power to use in its latest object recognition tool, which was released earlier this year as part of the company’s Common Vision Blox platform. The shape-finding tool first uses a lower-resolution image to locate candidate objects, then searches the full-resolution image only at those likely locations. After that, the results are supposed to be accurate enough for classification of shapes and objects.
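
A coarse-to-fine search of that kind can be sketched in a few lines: match at reduced resolution to find a candidate location, then refine within a small window of the full-resolution image. The example below uses OpenCV and synthetic data; it is not Common Vision Blox code.

```python
# Sketch of a coarse-to-fine pattern search (OpenCV, synthetic data):
# locate a candidate at quarter resolution, then refine at full resolution.
import cv2
import numpy as np

scene = np.random.randint(0, 256, (960, 1280), dtype=np.uint8)
template = scene[400:464, 600:664].copy()                 # 64x64 pattern

# Coarse pass: downscale both images by 4x and find the best rough location.
small_scene = cv2.resize(scene, None, fx=0.25, fy=0.25, interpolation=cv2.INTER_AREA)
small_tmpl = cv2.resize(template, None, fx=0.25, fy=0.25, interpolation=cv2.INTER_AREA)
_, _, _, (cx, cy) = cv2.minMaxLoc(cv2.matchTemplate(small_scene, small_tmpl, cv2.TM_CCOEFF_NORMED))
x0, y0 = cx * 4, cy * 4                                   # map back to full resolution

# Fine pass: search only a small window around the coarse estimate.
margin = 16
roi = scene[max(0, y0 - margin): y0 + 64 + margin,
            max(0, x0 - margin): x0 + 64 + margin]
_, score, _, (dx, dy) = cv2.minMaxLoc(cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED))
print("refined match at", (max(0, x0 - margin) + dx, max(0, y0 - margin) + dy), "score", round(score, 3))
```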

However, even more precise results are possible. Using gray-scale correlation analysis at the relevant sites, Stemmer’s system can achieve a positioning accuracy down to about 1/30 of a pixel, the resolution limit for certain hardware configurations. Despite this performance, the company reduced the number of adjustable parameters needed for searching and training.
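
Sub-pixel accuracy of that order is usually obtained by interpolating the correlation surface around its peak; one common approach, shown here in one dimension for clarity, fits a parabola through the peak score and its two neighbors. This is a generic technique, not a description of Stemmer's implementation.

```python
# Sketch: refine a correlation peak to sub-pixel position with a parabolic fit
# (a common technique, shown in one dimension for clarity).
def subpixel_peak(left, peak, right):
    """Offset of the true maximum from the integer peak position, in pixels."""
    denom = left - 2.0 * peak + right
    return 0.0 if denom == 0 else 0.5 * (left - right) / denom

# Correlation scores at pixel positions 9, 10 and 11: the true peak lies
# slightly to the right of pixel 10.
offset = subpixel_peak(0.90, 0.98, 0.95)
print("peak at pixel", 10 + offset)   # about 10.23
```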

Reducing training is another area where increasing computing power could be put to use. As mentioned before, there are software tools to determine an edge or match a pattern, but the developer provides the criteria used by a program in classifying an item as good or bad. That’s changing as software vendors put neural networks to use.

In a neural network, connections exist between processing elements, which are the computer equivalent of neurons. The organization of the connections and weight of the elements determine the output. The system is trained by adjusting weights to produce the desired outcome.
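
A minimal version of such a network can be written with a handful of weights trained by gradient descent. The NumPy sketch below separates two invented classes of feature vectors; it stands in for no particular vendor's classifier.

```python
# Minimal two-layer neural network trained by gradient descent (NumPy only).
# The toy features (e.g. aspect ratio, relative area) and labels are invented
# stand-ins for training samples of two part classes.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0.9, 0.2], [0.8, 0.3], [0.2, 0.9], [0.3, 0.8]])  # feature vectors
y = np.array([[0.0], [0.0], [1.0], [1.0]])                      # class labels

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)    # input-to-hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # hidden-to-output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):                            # adjust weights to fit the samples
    h = sigmoid(X @ W1 + b1)                     # hidden-layer activations
    out = sigmoid(h @ W2 + b2)                   # network output
    d_out = (out - y) * out * (1 - out)          # output error (squared-loss gradient)
    d_h = (d_out @ W2.T) * h * (1 - h)           # error propagated back to hidden layer
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(2))  # approaches [[0],[0],[1],[1]]
```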

MVTec uses this technique for its ActivOCR. Thanks in part to its multilayer neural network classifier, the software is said to have an out-of-the-box character recognition rate of almost 99 percent.

“ActivOCR includes many ready-to-use fonts with a practical industrial background, such as dot matrix prints, prints on metal surfaces, pharmacy or document fonts. More than 1 million characters were trained,” said Lutz Kreutzer, a spokesman for the company.

National Instruments also has a classification tool based on neural network technology. According to Voosen, the system learns on a few representative samples and then uses the neural weighting developed with those to classify other parts as, for example, nuts or bolts. It can handle high-speed machine vision applications where coarse distinctions have to be made. The system takes about 20 to 40 ms to classify a part, replacing a process of redundant pattern matches that could take seconds. Voosen expects that continued advances in hardware will enable similar performance for fine-distinction applications within the next five years.

When that happens, systems will be shown a good and a bad part and will automatically come up with differentiators. Speaking of such a self-training system, Voosen said, “With a little bit of nudging and guidance from the user, it could build those networks itself.”

Published: July 2005