Algorithm Makes Video Panoramas from Unstructured Camera Arrays

Even nonprofessionals may someday be able to create high-quality video panoramas using multiple cameras with the help of a new algorithm.

The method smooths out blurring, ghosting and other distortions that routinely occur when video feeds from unstructured camera arrays are combined to create a single panoramic video.

The algorithm corrects for parallax – the apparent shift in an object's position caused by viewing it from different camera angles – as well as for image warping and slight timing differences between cameras. All of these lead to the visible discontinuities, ghosting and other imperfections seen in existing approaches.
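To make the parallax problem concrete, here is a rough back-of-the-envelope illustration in Python – the focal length and camera spacing are assumed values, not figures from the study:

focal_length_px = 1800.0   # focal length in pixels (assumed value)
baseline_m = 0.10          # spacing between two adjacent cameras, in meters (assumed)

for depth_m in (2.0, 10.0, 50.0):
    # Apparent shift (disparity) of the same point between the two neighboring views.
    disparity_px = focal_length_px * baseline_m / depth_m
    print(f"subject at {depth_m:>5.1f} m shifts by about {disparity_px:6.1f} px between cameras")

With these assumed numbers, a person two meters away shifts by roughly 90 pixels between neighboring cameras while distant scenery barely moves, which is why nearby objects ghost or double when overlapping views are stitched naively.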



A team of researchers demonstrated the technique using as many as 14 cameras, generating panoramic video at resolutions ranging from tens of megapixels to more than 100 megapixels.

"We can foresee a day when just about anyone could create a high-quality video panorama by setting up a few video cameras or even linking several smartphones, just as many people today can easily create a still photo panorama with their smartphones," said Alexander Sorkine-Hornung, a senior research scientist at Disney Research Zurich, who collaborated with colleagues at ETH Zurich and Walt Disney Imagineering on the study.


Fourteen machine vision cameras made up one of the arrays used in the study. The algorithm produced panoramas without requiring precise placement of the cameras. Courtesy of Disney Research.

Though professional methods for creating video panoramas with calibrated camera arrays do exist, the Disney team focused on combining videos from multiple cameras that have overlapping fields of view but are neither precisely positioned nor perfectly synchronized.

Their technique automatically analyzes the images from the cameras to estimate the position and alignment of each camera, which eliminates the need for calibration and allows the cameras to be positioned flexibly.
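As a rough sketch of what such self-calibration can look like in practice – a generic, assumed approach using OpenCV, not Disney Research's actual implementation – matching feature points in the overlap between two neighboring cameras can be used to estimate an alignment transform:

import cv2
import numpy as np

def estimate_alignment(frame_a, frame_b):
    """Estimate a homography mapping frame_b onto frame_a (illustrative sketch only)."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # Detect and describe distinctive keypoints in each view.
    orb = cv2.ORB_create(4000)
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)

    # Match keypoints across the two views.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)

    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC discards mismatched points before fitting the transform.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

Estimating such transforms for every pair of overlapping cameras yields the relative placement needed to map all of the feeds into a single panoramic frame, without anyone measuring the rig by hand.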

The algorithm corrects for the parallax differences that create ghosting and other distracting effects where images from separate cameras are stitched together. It also detects and corrects for image warping – wavy lane markings on roads, or buildings that appear to bend – introduced when images are stitched together. The technique also compensates for slight differences in frame timing between cameras, which would otherwise cause jitter and other artifacts in the image.
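One simple way to picture the timing compensation – a hedged sketch of the general idea, not the method described in the paper – is to estimate how many frames one camera lags another by comparing their brightness over time:

import numpy as np

def estimate_frame_offset(brightness_a, brightness_b, max_shift=30):
    """brightness_a/b: 1-D arrays of per-frame mean brightness from two cameras."""
    # Normalize both signals so the comparison ignores exposure differences.
    a = (brightness_a - np.mean(brightness_a)) / np.std(brightness_a)
    b = (brightness_b - np.mean(brightness_b)) / np.std(brightness_b)

    best_shift, best_score = 0, -np.inf
    for shift in range(-max_shift, max_shift + 1):
        # Slide one signal against the other and score how well they line up.
        if shift >= 0:
            overlap_a, overlap_b = a[shift:], b[:len(b) - shift]
        else:
            overlap_a, overlap_b = a[:shift], b[-shift:]
        n = min(len(overlap_a), len(overlap_b))
        score = np.dot(overlap_a[:n], overlap_b[:n]) / n
        if score > best_score:
            best_shift, best_score = shift, score
    # Positive result: camera A's clip starts that many frames earlier than camera B's.
    return best_shift

Once the offset between each pair of cameras is known, frames can be paired up (or interpolated) so that a moving subject appears in the same place at the same moment across the whole panorama.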

Funding came from the Swiss National Science Foundation. The findings are to be presented at Eurographics 2015, the annual conference of the European Association for Computer Graphics.

For more information, visit www.disneyresearch.com.


