3-D View Has Neural Base

A small area of the brain that combines visual and nonvisual cues is behind our ability to perceive depth with one eye, a University of Rochester team has discovered.

"It looks as though in this area of the brain, the neurons are combining visual cues and nonvisual cues to come up with a unique way to determine depth," Greg DeAngelis, a professor in the Department of Brain and Cognitive Sciences at the university.

Humans and other animals can visually judge depth because we have two eyes and the brain compares the images from each. But we can also judge depth with only one eye, and scientists have long sought to explain how the brain accomplishes that feat. DeAngelis's team believes it has found the answer in a small part of the brain that processes both the image from a single eye and the motion of our bodies.

DeAngelis said that means the brain uses a whole array of methods to gauge depth. In addition to two-eyed "binocular disparity," the brain makes use of other cues such as motion, perspective, and how objects pass in front of or behind each other to create a representation of the 3-D world in our minds.

The findings could eventually help children born with misaligned eyes to recover more normal binocular vision in the brain. They could also help in building more compelling virtual reality environments, since designers need to know exactly how the brain constructs 3-D percepts to make virtual reality as convincing as possible, DeAngelis said.

The new neural mechanism is based on the fact that objects at different distances move across our vision with different directions and speeds, due to a phenomenon called motion parallax, DeAngelis said in a statement.

"When staring at a fixed object, any motion we make will cause things nearer than the object to appear to move in the opposite direction, and more distant things to appear to move in the same direction. To figure out the real 3-D layout of a scene," DeAngelis said, "the brain needs one more piece of information, and it pulls in this information from the motion of the eyeball itself."

He said neurons in the middle temporal area of the brain combine visual information with physical movement to extract depth information. From the retinal image alone, the motion of near and far objects can be confused. But if the eye moves while tracking the overall movement of the scene, the middle temporal neurons have enough information to infer that objects moving across the scene in the same direction as the head must be far away, whereas objects moving in the opposite direction must be close by.
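That sign rule can be written as a simple classifier. The function below is again only an illustrative sketch under the assumptions above, not the study's actual model: during a sideways head movement the eye pursues the fixation point in the direction opposite the head, so image motion that agrees with the pursuit indicates a near object and motion that opposes it indicates a far one.

# Hypothetical sketch of the sign rule described above (not the study's model).
def depth_sign(image_slip, pursuit_velocity):
    product = image_slip * pursuit_velocity
    if product > 0:
        return "nearer than fixation"   # image moves with the pursuit, against the head
    if product < 0:
        return "farther than fixation"  # image moves against the pursuit, with the head
    return "at the fixation distance"   # no relative slip

# Example: the head moves right, so the pursuit is leftward (negative).
print(depth_sign(image_slip=-0.05, pursuit_velocity=-0.1))  # nearer than fixation
print(depth_sign(image_slip=+0.05, pursuit_velocity=-0.1))  # farther than fixation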

"We use binocular disparity, occlusion, perspective and our own motion all together to create a representation of the real, 3-D world in our minds," said DeAngelis.

The research was conducted in collaboration with Jacob W. Nadler and Dora E. Angelaki at Washington University and was funded by the National Institutes of Health. The findings were published in the March 20 online issue of the journal Nature.

For more information, visit: www.rochester.edu
