Depending on the kindness of strangers

Photonics Spectra
Feb 2010
Lynn M. Savage, Features Editor, lynn.savage@photonics.com

With the help of hundreds of people you’ve never met, you might be able to get a grand view of a tourist destination you’ve never visited.

Imaging science has led to ways to create detailed recordings of famous landmarks, including the Colosseum in Rome and the Eiffel Tower in Paris. Such recordings provide exquisite three-dimensional views but require painstaking measurement with laser or acoustic systems and expensive high-end cameras. Now, however, a team of researchers at the University of Washington (UW) in Seattle has devised a 3-D image reconstruction method that pulls photographs from an Internet-based repository and stitches them together with unprecedented speed.


Composites courtesy of the University of Washington.

Working under Sameer Agarwal, assistant professor of computer science and engineering, the group processed hundreds of thousands of snapshots taken by tourists in and around Rome and Venice, Italy, as well as in Dubrovnik, Croatia. The photos were found by searching for the cities’ names on flickr.com, a public photo-hosting site owned by Yahoo Inc.

For example, to create representations of Venice – some of which are shown here – the UW scientists downloaded 250,000 images taken by a disparate group of tourists. Despite differences in cameras and lenses used, viewing angles, backgrounds, lighting and other factors, the software developed by the team pieced together richly detailed models of the entire city, not just of individual buildings and features.

Agarwal and his colleagues processed the multitude of snapshots on a cluster of 62 computer nodes, each with two quad-core processors. The algorithms they designed first searched for the most likely pairs of matching points between individual images. Once these likely pairs were identified, the software refined the matches – including dimensional information – and stitched the images together. With the computing power on hand, they completed the matching and reconstruction process in only about 65 hours.
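To give a rough sense of that first, pairwise-matching step, the sketch below uses off-the-shelf OpenCV routines (SIFT features, nearest-neighbor matching with a ratio test, then a RANSAC-estimated fundamental matrix to keep only geometrically consistent correspondences). It is an illustration of the general technique, not the UW team's own software, and the file names are hypothetical placeholders.

import cv2
import numpy as np

def match_pair(path_a, path_b, ratio=0.75):
    """Find likely point matches between two photos, then keep only those
    consistent with a single epipolar geometry (the 'refinement' step)."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    # Detect distinctive feature points and descriptors in each photo.
    sift = cv2.SIFT_create()
    kp_a, desc_a = sift.detectAndCompute(img_a, None)
    kp_b, desc_b = sift.detectAndCompute(img_b, None)

    # Step 1: candidate matches via nearest-neighbor search plus Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(desc_a, desc_b, k=2)
    candidates = [m for m, n in (p for p in knn if len(p) == 2)
                  if m.distance < ratio * n.distance]
    if len(candidates) < 8:
        return []  # the two photos probably do not overlap

    # Step 2: refine - keep only matches consistent with one fundamental matrix.
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in candidates])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in candidates])
    F, mask = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC, 1.0, 0.999)
    if F is None:
        return []
    return [m for m, keep in zip(candidates, mask.ravel()) if keep]

# Hypothetical usage on two tourist snapshots of the same building:
# good = match_pair("venice_0001.jpg", "venice_0002.jpg")
# print(len(good), "geometrically consistent matches")

Running this over every promising pair of photos yields the match graph from which a full 3-D reconstruction can then be triangulated and bundle-adjusted; distributing that work across many nodes is what brings the total time down to tens of hours for a city-scale collection.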

