UK Agency Funds Active Vision Research
Daniel S. Burgess
The UK Engineering and Physical Sciences Research Council has awarded scientists at Oxford University a three-year grant valued at approximately $470,000 for the further development of a dynamic approach to simultaneous localization and map building that employs active vision with a single handheld camera. The project expands on a prior grant from the agency, which resulted in a proof-of-concept prototype.
The UK Engineering and Physical Sciences Research Council is funding efforts at Oxford University to develop a single-camera simultaneous localization and mapping system. The work has applications in robotics as well as in the fusion of live video with computer-generated effects, such as in this demonstration, in which virtual shelving and a virtual table have been added to video of a real kitchen. Courtesy of Andrew J. Davison.
Simultaneous localization and map building refers to a strategy in robotics in which a system builds a representation of its surrounding environment while simultaneously estimating its own position within that representation. Typically, this has involved 2-D mapping strategies employing sonar or laser rangefinders.
The Oxford scientists instead are investigating an active vision approach, in which a camera system selects persistent, relatively large features in its 3-D surroundings as landmarks that are continually evaluated with regard to their utility for navigation. To establish the depth of a landmark in the visual field, the system generates a set of depth hypotheses along the feature's line of sight, based on assumptions about the dimensions of the space; these hypotheses are reweighted by image measurements at subsequent time steps as the camera moves. After a few such measurements, the distribution of probable depths can be approximated as Gaussian, and the landmark is initialized as a point in the 3-D map.
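The depth-initialization idea can be sketched in code. The following is a minimal illustration, not the Oxford group's implementation: it assumes an idealized pinhole camera translating sideways, so that a point at depth d predicts a disparity of roughly focal × baseline / d pixels, and it reweights a set of depth hypotheses by each new measurement until the distribution is narrow enough to treat as Gaussian. All function names, parameters, and numerical values are illustrative.

```python
import numpy as np

def init_depth_hypotheses(d_min=0.5, d_max=5.0, n=100):
    """Uniform depth hypotheses (meters) over an assumed room-scale range."""
    depths = np.linspace(d_min, d_max, n)
    weights = np.full(n, 1.0 / n)
    return depths, weights

def reweight(depths, weights, measured_disparity, baseline, focal, sigma=1.0):
    """Reweight each hypothesis by the likelihood of the observed disparity,
    assuming pinhole geometry and Gaussian pixel noise."""
    predicted = focal * baseline / depths          # disparity each depth predicts
    likelihood = np.exp(-0.5 * ((predicted - measured_disparity) / sigma) ** 2)
    weights = weights * likelihood
    return weights / weights.sum()

def is_roughly_gaussian(depths, weights, ratio=0.2):
    """Crude convergence test: weighted std small relative to mean depth."""
    mean = np.sum(weights * depths)
    std = np.sqrt(np.sum(weights * (depths - mean) ** 2))
    return std / mean < ratio

# Simulate a camera translating past a feature at a true depth of 2.0 m.
true_depth, focal = 2.0, 500.0
rng = np.random.default_rng(0)
depths, weights = init_depth_hypotheses()
baseline = 0.0
while not is_roughly_gaussian(depths, weights):
    baseline += 0.02  # camera translates 2 cm per time step
    disparity = focal * baseline / true_depth + rng.normal(0.0, 1.0)
    weights = reweight(depths, weights, disparity, baseline, focal)

estimate = np.sum(weights * depths)
print(f"initialized landmark depth: {estimate:.2f} m")
```

Once the hypothesis set collapses to a narrow unimodal distribution, the landmark can be committed to the map as a 3-D point with Gaussian uncertainty, which is the form a standard Kalman-filter-based SLAM back end expects.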
Beyond navigation in robotics, the work has potential applications in broadcast entertainment. With such a real-time understanding of the position of a camera, computer-generated image effects could be integrated with a live video feed to create, for example, virtual sets.