UK Agency Funds Active Vision Research
Daniel S. Burgess
The UK Engineering and Physical Sciences Research Council has awarded a three-year grant valued at approximately $470,000 to scientists at Oxford University for the further development of a dynamic approach to simultaneous localization and map building that employs active vision with a single handheld camera. The project expands on a prior grant from the agency, which resulted in a proof-of-concept prototype.
The UK Engineering and Physical Sciences Research Council is funding efforts at Oxford University to develop a single-camera simultaneous localization and mapping system. The work has applications in robotics as well as in the fusion of live video with computer-generated effects, such as in this demonstration, in which virtual shelving and a virtual table have been added to video of a real kitchen. Courtesy of Andrew J. Davison.
Simultaneous localization and map building refers to a strategy in robotics whereby a system builds a representation of its surrounding environment while simultaneously establishing its own position within it. Typically, this has been done with 2-D mapping strategies that employ sonar or laser rangefinders.
The Oxford scientists instead are investigating an active vision approach, in which a camera system selects persistent, relatively large features in its 3-D surroundings as landmarks and continually evaluates their utility for navigation. To establish the depth of a landmark in the visual field, the system generates a set of depth hypotheses along the feature's line of sight, based on an assumed range of scene dimensions; as the camera moves, measurements at subsequent time steps reweight these hypotheses. After a few such updates, the distribution of probable depths can be approximated as Gaussian, and the landmark is initialized as a point in the 3-D map.
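The depth-initialization step described above can be sketched in code. The following is a minimal illustration in Python with NumPy, not the Oxford group's implementation: the function names, the inverse-depth measurement model, and all parameter values are assumptions chosen for the example. The idea is to spread depth hypotheses along the feature's view ray, reweight them by the likelihood of each new measurement as the camera moves, and collapse the distribution to a Gaussian once it is sufficiently peaked.

```python
import numpy as np

def initialize_depth_particles(d_min, d_max, n=200):
    """Spread depth hypotheses uniformly along the feature's view ray."""
    depths = np.linspace(d_min, d_max, n)
    weights = np.full(n, 1.0 / n)
    return depths, weights

def reweight(depths, weights, observed, predict_fn, sigma=0.02):
    """Reweight each hypothesis by the likelihood of a new measurement.

    predict_fn maps a candidate depth to a predicted measurement
    (here, an inverse-depth parallax value is assumed).
    """
    predicted = predict_fn(depths)
    likelihood = np.exp(-0.5 * ((observed - predicted) / sigma) ** 2)
    weights = weights * likelihood
    return weights / weights.sum()

def try_collapse_to_gaussian(depths, weights, rel_std_threshold=0.1):
    """Approximate the distribution as Gaussian once it is tight enough.

    Returns (mean, std) when the relative spread falls below the
    threshold, signaling the landmark can enter the 3-D map as a point;
    otherwise returns None and the particle set is kept.
    """
    mean = np.sum(weights * depths)
    std = np.sqrt(np.sum(weights * (depths - mean) ** 2))
    if std / mean < rel_std_threshold:
        return mean, std
    return None

if __name__ == "__main__":
    # Toy run: a feature at a true depth of 2.0 m, observed via an
    # assumed inverse-depth measurement model over five camera steps.
    depths, weights = initialize_depth_particles(0.5, 5.0)
    predict = lambda d: 1.0 / d
    for _ in range(5):
        weights = reweight(depths, weights, observed=1.0 / 2.0, predict_fn=predict)
    estimate = try_collapse_to_gaussian(depths, weights)
    if estimate is not None:
        print("landmark depth: %.2f m (std %.3f m)" % estimate)
```

In this sketch each measurement multiplies the hypothesis weights by a Gaussian likelihood, so hypotheses inconsistent with the observed parallax die off quickly, which mirrors the article's description of hypotheses being tested at subsequent time steps until the depth distribution is well approximated as Gaussian.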
Beyond navigation in robotics, the work has potential applications in broadcast entertainment. With such a real-time understanding of the position of a camera, computer-generated image effects could be integrated with a live video feed to create, for example, virtual sets.