Robotic System Models Human Vision
Laurel M. Sheppard
BALTIMORE — A major step toward making science fiction a reality -- robots that can see like humans under real-world conditions -- is under way at Johns Hopkins University. With the aid of a chip-based vision system, a toy car was able to follow a line around a track, avoiding obstacles along the way. Visual tracking applications could include surveillance, surgery, manufacturing and videoconferencing.
With a single-chip vision system, this toy car can navigate around a test track while avoiding obstacles. Photo by Mike McGovern.
Traditional approaches to solving real-world problems with computer vision have depended heavily on charge-coupled device cameras and workstations. Although these approaches can handle the huge amount of data the cameras produce, they require large, expensive systems.
The new system -- developed by Ralph Etienne-Cummings, an electrical engineer at the university -- is based on a single 6.4 × 6.8-mm chip. The computational sensor performs analog and digital processing, makes decisions and communicates them to the robot. It is expected to lead to compact systems with large computational capabilities at low cost, in terms of both hardware and power.
To reduce the amount of data collected for a specific application, the sensor can be custom-designed to extract only the relevant information and present it to a simple microprocessor. A general-purpose computational sensor has also been proposed.
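To make the data-reduction idea concrete, here is a minimal software sketch: rather than shipping a full frame to the host, the sensor could report only the line's horizontal offset in a row of pixels -- the single number a steering loop needs. This is purely illustrative, not the chip's actual on-chip computation; the function name and threshold are assumptions.

```python
def line_offset(row, threshold=128):
    """Return the centroid of bright pixels relative to the row's center,
    or None if no line is visible in this row.

    Illustrative stand-in for on-sensor data reduction: a whole row of
    pixel values collapses to one steering number."""
    bright = [i for i, p in enumerate(row) if p >= threshold]
    if not bright:
        return None
    centroid = sum(bright) / len(bright)
    return centroid - (len(row) - 1) / 2.0

# One 16-pixel row with a bright line right of center.
row = [0] * 16
row[10] = row[11] = 255
offset = line_offset(row)  # positive offset -> steer right
```

A microprocessor receiving one such offset per frame, instead of 16,384 pixel values from a 128 × 128 imager, has far less work to do -- which is the point of doing the extraction at the sensor.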
The chip's design borrows from nature, modeling the behavior of a primate's visual tracking system, namely the retina and parts of the brain. The retinal portion of the chip uses photodiodes because of their fast response characteristics, as well as several types of circuits that mimic the first three layers of cells in the retina.
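The early retinal layers the chip mimics are commonly modeled as center-surround receptive fields: each output compares a small central region against its larger surround, which suppresses uniform illumination and emphasizes edges. Below is a minimal NumPy sketch of that model -- a software analogy, not a description of the chip's circuitry; the kernel sizes are assumptions.

```python
import numpy as np

def center_surround(image, center_size=1, surround_size=3):
    """Crude center-surround response: the mean of a small center patch
    minus the mean of a larger surround patch, as in difference-of-
    Gaussians models of early retinal processing."""
    def box_mean(img, k):
        # k x k mean filter with edge padding.
        pad = k // 2
        padded = np.pad(img, pad, mode="edge")
        out = np.zeros_like(img, dtype=float)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)

    return box_mean(image, center_size) - box_mean(image, surround_size)

# A bright vertical line on a dark background: the response is positive
# on the line, negative just beside it, and near zero in flat regions.
frame = np.zeros((8, 8))
frame[:, 4] = 1.0
response = center_surround(frame)
```

The same comparison, done in analog circuitry at every photodiode, yields edge information directly at the focal plane with no frame readout required.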
Electronic gain controls -- which are much less expensive and simpler than their mechanical counterparts -- allow the imaging system to operate in a wide range of ambient lighting conditions. The chip operates over five orders of magnitude of speed, and uses a fast 128 × 128 complementary metal oxide semiconductor (CMOS) imager that can be easily integrated with traditional analog and digital circuits. The chip is interfaced with an 8-bit microcomputer to implement fast autonomous navigation.
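The effect of electronic gain control can be approximated in software by logarithmic compression: mapping several decades of illumination onto a fixed output range, so neither dim rooms nor direct sunlight saturate the sensor. The sketch below assumes a five-decade usable range and an arbitrary dark level; both numbers are illustrative, not the chip's specifications.

```python
import math

def log_adapted_response(lux, dark_lux=0.01, full_scale=255):
    """Map illuminance spanning several decades onto a fixed 0..255
    output code via logarithmic compression -- a software stand-in
    for electronic gain control."""
    decades = 5.0  # assumed usable range above the dark level
    ratio = max(lux, dark_lux) / dark_lux
    code = full_scale * min(math.log10(ratio) / decades, 1.0)
    return round(code)

# Office lighting (~100 lx) and bright sunlight (~100,000 lx)
# both land inside the output range rather than clipping.
office = log_adapted_response(100)
sunlight = log_adapted_response(100_000)
```

A linear mapping tuned for sunlight would render the office scene as almost uniformly black; the logarithmic curve keeps both usable, which is why wide-dynamic-range imagers favor it.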
This imaging system can be implemented in standard integrated-circuit technologies, and CMOS circuits can be used for focal plane image processing. Other benefits include direct processing of the image data and fast readout potential. Most of the engineering bugs have been worked out, Etienne-Cummings said, and only the applications need to be further defined. He believes that, with the right amount of funding, this vision system could reach the marketplace in as little as six months.