Researchers at the University of Illinois at Urbana-Champaign are borrowing Mother Nature's tricks to make cameras with a brain. The result could be systems that distinguish birds from planes or that automatically aim themselves for distance-learning applications.

[Photo: Researchers modeling the visual reflex in the vertebrate brain have developed a self-aiming camera system. A neural network simulation monitors input from a pair of microphones and a camera. If the video and audio signals originate from the same place at the same time, the system aims a second camera at that location. Courtesy of Sylvian Ray.]

Sylvian Ray, a professor of computer science and a researcher at the Beckman Institute for Advanced Science and Technology, explained that the technique is derived from research into the superior colliculus, a part of the vertebrate brain responsible for the visual reflex. The structure takes in sight, sound, body sensation and other sensory information, then maps these data onto the outside world. Sights and sounds that arrive from nearby points in space and time generate a signal in this brain center. A cat, for example, will turn its head at night toward the movement of branches accompanied by a rustle of leaves: either stimulus alone would generate little interest, but the combination catches the cat's attention.

Ray and his colleagues built the self-aiming system from off-the-shelf cameras and microphones, with a neural network controlling system functions. The system compares successive video frames from one camera while monitoring the signal from the microphones. If the video and audio originate from the same place at the same time, the system aims a second camera at that spot.

"You can sense the overall environment at sort of low resolution, and then if you pick up some kind of area of interest, you can focus in on it," Ray explained.
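The coincidence idea described above can be sketched in a few lines of code. This is a minimal illustration, not the Illinois group's neural network: all function names, thresholds, and the sector-based layout are assumptions made for the example. Motion is estimated by differencing successive frames, and the system reacts only when motion and sound energy peak in the same spatial sector at the same time.

```python
def frame_motion(prev_frame, curr_frame):
    """Mean absolute pixel difference between two grayscale frames
    (each a flat list of intensity values)."""
    return sum(abs(a - b) for a, b in zip(prev_frame, curr_frame)) / len(curr_frame)

def coincidence_sector(motion_by_sector, audio_by_sector,
                       motion_thresh=10.0, audio_thresh=0.5):
    """Return the index of the sector where both motion and audio energy
    exceed their thresholds, or None if sight and sound never line up.
    Ties are broken by a simple multiplicative fusion score (an assumed
    heuristic, not the published method)."""
    best_sector, best_score = None, 0.0
    for i, (motion, audio) in enumerate(zip(motion_by_sector, audio_by_sector)):
        if motion > motion_thresh and audio > audio_thresh:
            score = motion * audio  # reward strong agreement of both cues
            if score > best_score:
                best_sector, best_score = i, score
    return best_sector
```

As in the cat example, a single cue is not enough: strong motion with no accompanying sound (or vice versa) returns None, while a coincident pair in one sector would be where a second camera gets aimed.

```python
# Motion and sound both peak in sector 2: aim there.
coincidence_sector([0.0, 2.0, 30.0], [0.1, 0.2, 0.8])  # -> 2

# Motion alone in sector 0, no matching sound: no target.
coincidence_sector([30.0, 0.0, 0.0], [0.1, 0.1, 0.1])  # -> None
```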
Thinking up uses

The camera systems are primarily research tools, and the researchers intended them as experimental models of how the superior colliculus works. The technique could nevertheless find numerous applications. One possibility would be to pick out and focus on a student who raises a hand and voices a question for the remote teacher in a distance-learning setup. Another would be to spot and classify objects for military or security purposes.