Hank Hogan, email@example.com
AUSTIN, Texas – It was nothing dramatic, just a guy jogging in place. But thanks to some technical wizardry, that image was superimposed over another in real time on a screen. So it looked as if Tim Dehne, NI vice president, was running down a road.
That feat took place in Austin, Texas, at the beginning of NIWeek 2008, which drew 100 exhibitors and almost 2700 registrants. A number of camera makers were there, along with some optics manufacturers and other vision-related vendors. The mood overall was upbeat, although the number of camera vendors was down. That could be because NI is now in the camera business itself, having released three new smart cameras last month.
The big emphasis by NI at its users’ conference was on green engineering. As might be expected, the company’s software and hardware are supposed to play a big role in reducing environmental impact, either through design or operation.
Behind the scenes
Dehne, of course, took his dubbed-in jog using NI software – and some Intel hardware. Thanks to a multicore processor, the system had enough oomph to image the screen before Dehne stood there, to capture the video of him running, to subtract out the background and to merge that with a recorded video of travel down a road.
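The demo's core trick is simple difference keying: compare each live frame against a reference image of the empty scene, and wherever the pixels differ enough, treat them as foreground. NI's implementation ran in LabVIEW and is not shown; the sketch below is a minimal illustration of the idea, assuming grayscale frames stored as NumPy arrays, with a hypothetical `composite` function and threshold.

```python
import numpy as np

def composite(background, frame, scene, threshold=30):
    """Difference keying: paste the foreground of `frame`
    (pixels that differ from the empty `background` image)
    onto a separately recorded `scene`."""
    # Compute per-pixel difference in a signed type to avoid
    # uint8 wraparound, then mark pixels that changed enough.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    foreground = diff > threshold

    # Start from the recorded road scene and overwrite only the
    # foreground pixels (the person) from the live frame.
    out = scene.copy()
    out[foreground] = frame[foreground]
    return out
```

With a multicore processor, successive frames (or horizontal strips of one frame) can be keyed on separate cores, which is what makes the effect feasible in real time.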
For vision systems and imaging applications, the message is that multicore processors and the right software soon should enable some pretty interesting uses. An example is an optical coherence tomography system being built by Kohji Obayashi of Kitasato University in Japan. To achieve Obayashi's goals, the system must do 1.5 million 1000-point fast Fourier transforms a second. Right now, the best achievable is about 1 million. But quad-core systems are due out soon, with Intel predicting 80-core chips by 2011. So the processing power should be there.
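To put that FFT requirement in perspective, a rough back-of-the-envelope conversion to floating-point throughput is possible, assuming the standard approximate flop count of 5N log2(N) for an N-point complex FFT (an assumption; the real cost depends on the algorithm and hardware):

```python
import math

def fft_gflops(n_points, ffts_per_second):
    """Approximate sustained GFLOPS needed to run a stream of FFTs,
    using the common 5 * N * log2(N) flop estimate per transform."""
    flops_per_fft = 5 * n_points * math.log2(n_points)
    return ffts_per_second * flops_per_fft / 1e9

required = fft_gflops(1000, 1.5e6)  # Obayashi's target: ~75 GFLOPS sustained
achievable = fft_gflops(1000, 1.0e6)  # today's ~1 million/s: ~50 GFLOPS
```

By this estimate the target works out to roughly 75 GFLOPS sustained on the FFTs alone, which explains why more cores, rather than faster single cores, are the expected route there.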
But there could be problems. In a discussion about multicore vision, Carlos Guzman of NI noted that performance may not increase linearly with the number of cores. A quad-core system, for example, does not deliver twice the processing power of a dual-core machine or four times that of a single-core one. Software and timing issues also can arise, and they could become more complex as the number of cores rises. Resolving those issues is a problem for all software developers because the tools to efficiently manage and optimize a large number of cores don't really exist yet.
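The sublinear scaling Guzman described is commonly explained with Amdahl's law: the serial fraction of a workload caps the speedup no matter how many cores are added. A small worked example, assuming a hypothetical workload that is 90% parallelizable:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: speedup over one core when a given
    fraction of the work can run in parallel."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

# With 90% of the work parallelized, four cores give only
# about a 3.1x speedup over a single core, not 4x, and the
# gain per additional core shrinks as the count grows.
quad = amdahl_speedup(0.9, 4)
```

The same formula shows why 80-core chips would demand nearly fully parallel code (and the tools to write it) to pay off.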
In a talk about designing for inspection, Robert Trait of General Electric Co. discussed building solid models of components, which allow systems designers to simulate inspection before building anything. That could change the way vision applications are designed and built. Interestingly, he said that lighting and, sometimes, optics manufacturers often don't provide the 3-D model information needed. His solution is to do his own measurements.
Averna, an exhibitor that builds custom automation and manufacturing solutions, also does 3-D modeling of what the camera sees. The modeling helps ensure that designers know what the camera should see before anything is actually constructed.
There were some interesting applications on display. One involved a vision system that automatically tracked the movement of a rat around an arena as part of studies about memory. The system found the rat’s nose, the location of which was used to determine whether the rat remembered an object or not.
Another application involved Odin, an automated vehicle built by a team from Virginia Polytechnic Institute & State University in Blacksburg that successfully navigated urban traffic during a DARPA-sponsored contest. Here, the vision system cataloged objects, while lidar was used to determine distances.
There also were a couple of industrial inspection applications on display, as well as a new line of robots coming out from Lego in January that have some very primitive vision and light-sensing capabilities. These robots are simpler and less expensive versions of the popular Mindstorms series. Besides being toys, they are used in high schools and universities to teach the fundamentals of robotics.