
3-D interior modeling: A virtual walk on the inside

A laser-equipped backpack worn by a human operator has provided 3-D modeling of interior sections of a campus building at the University of California, Berkeley, enabling a virtual walk through the academic halls.

Automated 3-D modeling of building interiors has possible virtual reality applications in entertainment, gaming, architecture, building energy management systems and even military reconnaissance. The technique of “virtualizing” interiors also could have applications in documenting historical sites, mapping hazardous areas and preparing for disaster management.

“Traditional indoor mapping systems are on wheeled robots or pushcarts which work on planar surfaces,” said Avideh Zakhor, a professor and lead researcher for the project at the university’s video and image processing lab.

There are distinct disadvantages to such systems, she added. “A human operator can ensure that all objects of interest within an indoor environment are properly captured. Today, a robot cannot offer that. Another important technological innovation is to localize in the absence of GPS signals. In outdoor modeling, GPS can readily be used to recover pose. Indoors, the GPS signal does not penetrate buildings and, therefore, other techniques have to be developed for indoor localization.”

Also, traditional indoor mapping systems recover only three degrees of freedom of movement: X, Y and yaw, Zakhor noted. “The ability to model complex environments such as staircases or uneven surfaces was one of our motivations at the outset, to come up with a human-operated backpack system, rather than a wheeled robot: A robot cannot do staircases, but a human operator can.”


A laser-equipped backpack for virtualizing building interiors is carried by a human operator. Images courtesy of VIP Lab, University of California, Berkeley.


The most important technological advances in the research thus far are automatic sensor fusion algorithms that localize the backpack accurately and that build texture-mapped 3-D models, she said. “These models can then be rendered for interactive virtual walkthroughs or ‘flythroughs’ of indoor environments. The localization is particularly difficult, since we need to recover six degrees of freedom of movement – that is, X, Y, Z, yaw, pitch and roll.”

The backpack apparatus includes several laser scanners, cameras and an inertial orientation measurement system (OMS). “In essence, the laser scanners serve a dual purpose,” Zakhor said. The scans from the 2-D laser scanners are used to localize the backpack by matching successive horizontal scans to recover yaw and successive vertical scans to recover roll or pitch. In addition, once the backpack is localized, the scans are used to create a 3-D point cloud of the environment, which is essentially the 3-D geometry.
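As a rough illustration of the localization half of that dual role, the following sketch (Python, with illustrative variable names; not the team’s actual algorithm) aligns two successive 2-D scans with a few iterations of nearest-neighbor matching and closed-form rigid alignment. Applied to the horizontal scanner, the recovered rotation is yaw; applied to a vertical scanner, it is pitch or roll.

```python
# Hypothetical 2-D scan-matching sketch: a few ICP iterations align
# two successive laser scans, yielding the incremental rotation and
# in-plane translation between them.
import numpy as np

def match_scans(prev_scan, curr_scan, iters=20):
    """prev_scan, curr_scan: (N x 2) point arrays in the sensor frame.
    Returns (d_rot, dx, dy) mapping the current frame to the previous one."""
    R, t = np.eye(2), np.zeros(2)
    src = curr_scan.copy()
    for _ in range(iters):
        # Brute-force nearest neighbors (adequate for a small demo scan).
        d2 = ((src[:, None, :] - prev_scan[None, :, :]) ** 2).sum(-1)
        nn = prev_scan[d2.argmin(axis=1)]
        # Closed-form rigid alignment of the matched pairs (Kabsch/SVD).
        mu_s, mu_n = src.mean(0), nn.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (nn - mu_n))
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:   # guard against a reflection
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_n - R_step @ mu_s
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return np.arctan2(R[1, 0], R[0, 0]), t[0], t[1]
```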


Shown is a model of hallways located on two floors of a building on the University of California, Berkeley, campus. The data for the model was captured in a single run with the laser backpack, using a stairwell to move between floors.



The camera imagery is used to texture-map the resulting 3-D models. It serves a dual purpose in that it is used to refine and reduce localization errors, Zakhor said. “In particular, camera imagery is used to automatically detect ‘loop closures’ – that is, places that the backpack has visited before. It turns out that such detections can be successfully used to drastically reduce the localization error due to laser scan matching and the OMS.”
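The effect of a loop closure can be pictured with a toy correction, sketched below under a strong simplification: the disagreement between where the backpack believes it is and where the revisited place actually lies is spread linearly over the intervening poses. (The real system folds such constraints into its sensor-fusion algorithms rather than applying them this crudely.)

```python
# Toy drift correction after a detected loop closure (illustrative only).
import numpy as np

def correct_loop(poses, i, j, offset_ij):
    """poses: list of (x, y, yaw); matching says pose j should sit at
    offset_ij relative to pose i. Spreads the residual over i+1..j."""
    target = np.array(poses[i][:2]) + np.asarray(offset_ij)
    error = target - np.array(poses[j][:2])
    corrected = list(poses)
    for k in range(i + 1, j + 1):
        frac = (k - i) / (j - i)
        x, y, yaw = poses[k]
        corrected[k] = (x + frac * error[0], y + frac * error[1], yaw)
    return corrected
```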

The laser scanners and cameras and OMS are all fused via a number of algorithms to localize the backpack, she said. “Once the backpack is localized, we can stack the vertical laser scans together to generate a point cloud, which is then texture-mapped using cameras.”
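Conceptually, that stacking step transforms each vertical scan from the sensor frame into a common world frame using the 6-DOF pose recovered for the instant the scan was taken. A minimal sketch, assuming a Z-Y-X (yaw-pitch-roll) rotation convention and scans already expressed as 3-D points in the sensor frame:

```python
# Minimal point-cloud assembly sketch under simplified assumptions.
import numpy as np

def rotation(yaw, pitch, roll):
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx          # Z-Y-X convention assumed here

def build_cloud(scans, poses):
    """scans: list of (N_i x 3) arrays in the sensor frame;
    poses: matching list of (x, y, z, yaw, pitch, roll)."""
    cloud = []
    for pts, (x, y, z, yaw, pitch, roll) in zip(scans, poses):
        cloud.append(pts @ rotation(yaw, pitch, roll).T + np.array([x, y, z]))
    return np.vstack(cloud)      # one 3-D point cloud of the environment
```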

The backpack has three orthogonally mounted laser scanners. “We apply scan matching algorithms to successive scans from each scanner to recover two translation and one rotation parameter,” she said. “For example, by applying scan matching to the horizontal scanner, we recover X, Y and yaw. Ditto for the two vertical scanners: One is used to recover X, Z and pitch, and the other, Y, Z and roll. We then combine these to recover all six degrees of freedom; that is, X, Y, Z, yaw, pitch and roll.”
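Glossing over how the real system weights these estimates against the OMS inside its fusion algorithms, the combination step can be caricatured as follows, with the overlapping translation estimates simply averaged:

```python
# Naive 6-DOF fusion sketch (illustrative; the actual system uses
# proper sensor-fusion algorithms rather than plain averaging).
def fuse_six_dof(horiz, vert1, vert2):
    """horiz = (x, y, yaw) from the horizontal scanner;
    vert1 = (x, z, pitch) and vert2 = (y, z, roll) from the vertical ones."""
    x_h, y_h, yaw = horiz
    x_v, z_1, pitch = vert1
    y_v, z_2, roll = vert2
    return (0.5 * (x_h + x_v), 0.5 * (y_h + y_v), 0.5 * (z_1 + z_2),
            yaw, pitch, roll)
```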

The researchers are working constantly to improve the error performance of this system, which translates directly into more accurate localization and better-looking models.

“My guess is that it has to be tested a lot more extensively in more buildings before it can be put into use on a routine basis for military missions,” Zakhor said, adding that the overall system can and should be streamlined, both algorithmically and architecturally.

“We have too many sensors and too many algorithms in action now. We need to systematically analyze to see which one of the many sensors we can discard without affecting the overall performance of the system,” she said.

She noted that rendering and visualization of the models are important considerations. “Our models are so detailed that commercial viewers are not able to render them in full detail. As such, we either have to develop simplification algorithms to make them work with existing off-the-shelf rendering algorithms or develop our own custom renderers. Without rendering capability, it is impossible to interact with and visualize the models we have worked hard to generate.”
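One common simplification strategy for oversized point clouds, shown below purely as an illustration (the article does not say which approach the team is pursuing), is voxel-grid downsampling: the cloud is divided into cubic cells and a single representative point is kept per cell, shrinking the model toward what an off-the-shelf viewer can render.

```python
# Illustrative voxel-grid downsampling of a point cloud.
import numpy as np

def voxel_downsample(points, voxel=0.05):
    """points: (N x 3) array; voxel: cell edge length in meters.
    Keeps the first point that falls in each occupied voxel."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]
```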

A paper by Zakhor and her team, titled “Indoor Localization and Visualization Using a Human-Operated Backpack System,” was presented at the 2010 International Conference on Indoor Positioning and Indoor Navigation in September in Zürich, Switzerland.

Research leading to the development of the laser backpack for military applications was funded by the US Air Force Office of Scientific Research (AFOSR) in Arlington, Va., and the US Army Research Office in Durham, N.C. The backpack could enable military personnel to view a virtual building interior collectively and to interact over a network to achieve goals such as mission planning, according to an AFOSR press release.

Published: January 2011
Glossary
algorithm
A precisely defined series of steps that describes how a computer performs a task.
pitch
In positioning, rotation about an axis normal to the line of sight. Also known as attitude.
roll
In positioning, rotation about the line of sight or direction of travel.
yaw
In positioning, in-plane rotation about the vertical axis. Also known as azimuth.
