
Algorithm Makes Video Panoramas from Unstructured Camera Arrays

Even nonprofessionals may someday be able to create high-quality video panoramas using multiple cameras with the help of a new algorithm.

The method smooths out blurring, ghosting and other distortions that routinely occur when video feeds from unstructured camera arrays are combined to create a single panoramic video.

The algorithm corrects for parallax – the apparent difference in position of an object caused by different camera angles – and image warping that occurs because of slight timing differences between cameras. Both parallax and image warping lead to visible discontinuities, ghosting and other imperfections seen in existing approaches.

A team of researchers demonstrated the technique using as many as 14 cameras, generating panoramic video on the order of tens of megapixels to more than 100 megapixels in resolution.

"We can foresee a day when just about anyone could create a high-quality video panorama by setting up a few video cameras or even linking several smartphones, just as many people today can easily create a still photo panorama with their smartphones," said Alexander Sorkine-Hornung, a senior research scientist at Disney Research Zurich, who collaborated with colleagues at ETH Zurich and Walt Disney Imagineering on the study.

Fourteen machine vision cameras were used to create one of the arrays used in the study. The algorithm created panoramas without particularly accurate placement of the cameras. Courtesy of Disney Research.

Though some professional methods using calibrated camera arrays do exist for creating video panoramas, the Disney team focused on combining videos from multiple cameras that have overlapping visual fields, but are not precisely positioned and are not perfectly synchronized.

Their technique automatically analyzes the images from the cameras to estimate the position and alignment of each camera, which eliminates the need for calibration and allows flexible positioning of the cameras.
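The article does not give the details of Disney's alignment step, but a standard building block for estimating how two overlapping views relate is fitting a homography to matched feature points via the Direct Linear Transform (DLT). The sketch below is a minimal, illustrative version of that idea in pure NumPy — it is not the researchers' algorithm, and the function name and four-point example are assumptions for demonstration.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points
    using the Direct Linear Transform. src, dst: (N, 2) arrays, N >= 4.
    Each correspondence contributes two rows to the linear system A h = 0."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(A)
    # h is the right singular vector associated with the smallest
    # singular value of A; reshape it into the 3x3 homography.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the arbitrary scale (and sign)

# Sanity check with a known transform: a pure translation by (5, 3).
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
dst = src + np.array([5.0, 3.0])
H = estimate_homography(src, dst)
# H recovers [[1, 0, 5], [0, 1, 3], [0, 0, 1]] up to numerical precision.
```

In a real stitching pipeline the correspondences would come from a feature detector and matcher, and a robust estimator such as RANSAC would discard bad matches before the DLT fit.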

The algorithm corrects for differences in parallax that create ghosting and other disturbing effects in the areas of the panorama where images from separate cameras are stitched together. It also detects and corrects for image warping — such as wavy lane markings on roads, or buildings that appear to bend — that occurs when images are stitched together. The technique also compensates for slight differences in the timing of frames between cameras, which otherwise would cause jitter and other artifacts in the image.
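To illustrate the timing-compensation idea in its simplest form: if one camera's shutter fires a fraction of a frame later than its neighbor's, a frame at that fractional time can be approximated by blending the two nearest captured frames. This is only a toy stand-in for the paper's temporal alignment — the function name and synthetic frames below are invented for the example.

```python
import numpy as np

def resample_frame(frames, t):
    """Approximate the frame at fractional time t by linearly blending
    the two nearest captured frames. frames: (T, H, W) grayscale stack;
    t: float in [0, T - 1]."""
    i = int(np.floor(t))
    a = t - i  # fractional offset between frame i and frame i + 1
    if a == 0.0:
        return frames[i].astype(float)
    return (1.0 - a) * frames[i] + a * frames[i + 1]

# Two synthetic 2x2 frames; a camera running a quarter-frame late
# relative to its neighbor needs the frame at t = 0.25.
frames = np.array([[[0.0, 0.0], [0.0, 0.0]],
                   [[4.0, 4.0], [4.0, 4.0]]])
mid = resample_frame(frames, 0.25)  # 0.75 * frame 0 + 0.25 * frame 1
```

Linear blending ghosts on fast motion, which is why production methods instead warp frames along estimated motion paths; the blend above only conveys the underlying sub-frame-offset problem.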

Funding came from the Swiss National Science Foundation. The findings are to be presented at Eurographics 2015, the annual conference of the European Association for Computer Graphics.

For more information, visit
May 2015

©2021 Photonics Media, 100 West St., Pittsfield, MA, 01201 USA, [email protected]