
Algorithm Makes Video Panoramas from Unstructured Camera Arrays

Even nonprofessionals may someday be able to create high-quality video panoramas using multiple cameras with the help of a new algorithm.

The method smooths out blurring, ghosting and other distortions that routinely occur when video feeds from unstructured camera arrays are combined to create a single panoramic video.

The algorithm corrects for parallax – the apparent difference in position of an object caused by different camera angles – and image warping that occurs because of slight timing differences between cameras. Both parallax and image warping lead to visible discontinuities, ghosting and other imperfections seen in existing approaches.
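For intuition, the magnitude of the parallax shift depends on the camera spacing and the scene depth: nearby objects shift far more between adjacent viewpoints than distant ones, which is why seams through foreground objects tend to ghost. The short sketch below is not part of the Disney method; it simply evaluates this shift for a basic pinhole-camera model, with the focal length, baseline and depths chosen as illustrative assumptions.

```python
# Illustrative only: parallax (image-plane disparity) between two pinhole
# cameras separated by a small baseline, for objects at different depths.
# All numbers are assumptions, not values from the study.

def parallax_px(focal_px: float, baseline_m: float, depth_m: float) -> float:
    """Horizontal image shift (pixels) of a point at the given depth."""
    return focal_px * baseline_m / depth_m

focal_px = 1200.0   # assumed focal length in pixels
baseline_m = 0.10   # assumed 10 cm spacing between adjacent cameras

for depth_m in (2.0, 10.0, 50.0):
    shift = parallax_px(focal_px, baseline_m, depth_m)
    print(f"depth {depth_m:5.1f} m -> shift {shift:6.1f} px")

# A point 2 m away shifts ~60 px between cameras while a point 50 m away
# shifts only ~2.4 px, so no single global alignment fits both; the residual
# misalignment shows up as ghosting where the images are stitched together.
```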



A team of researchers demonstrated the technique using as many as 14 cameras, generating panoramic video with resolutions on the order of tens of megapixels to more than 100 megapixels.

"We can foresee a day when just about anyone could create a high-quality video panorama by setting up a few video cameras or even linking several smartphones, just as many people today can easily create a still photo panorama with their smartphones," said Alexander Sorkine-Hornung, a senior research scientist at Disney Research Zurich, who collaborated with colleagues at ETH Zurich and Walt Disney Imagineering on the study.

Fourteen machine vision cameras were used to create one of the arrays used in the study. The algorithm created panoramas without requiring precise placement of the cameras. Courtesy of Disney Research.

Though some professional methods using calibrated camera arrays do exist for creating video panoramas, the Disney team focused on combining videos from multiple cameras that have overlapping visual fields, but are not precisely positioned and are not perfectly synchronized.

Their technique automatically analyzes the images from the cameras to estimate position and alignment of each camera, which eliminates the need for calibration and allows flexible positioning of the cameras. 
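One common way to estimate such alignment directly from image content is to match features in the overlapping regions of two cameras and robustly fit a transformation between them. The sketch below uses OpenCV (SIFT features plus a RANSAC-fitted homography) purely to illustrate that general idea; it is not the researchers' actual estimation pipeline, and the file names and thresholds are assumptions.

```python
# Hedged sketch: estimate pairwise alignment between two overlapping views
# from image content alone (no calibration), using SIFT + RANSAC homography.
# Requires opencv-python; file names and thresholds are illustrative.
import cv2
import numpy as np

img_a = cv2.imread("camera_a_frame.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("camera_b_frame.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_a, des_a = sift.detectAndCompute(img_a, None)
kp_b, des_b = sift.detectAndCompute(img_b, None)

# Match descriptors and keep only unambiguous matches (Lowe's ratio test).
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des_a, des_b, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# Robustly fit a homography; RANSAC discards mismatched feature pairs.
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print("estimated mapping from camera A to camera B:\n", H)
```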

The algorithm corrects for differences in parallax that create ghosting and other disturbing effects in the areas of the panorama where images from separate cameras are stitched together. It also detects and corrects for image warping – wavy lane markings on roads or buildings that appear to bend over – that occurs when images are stitched together. The technique also compensates for slight differences in the timing of frames between cameras, which otherwise would cause jitter and other artifacts in the image.
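A simple way to see how timing differences can be recovered without hardware synchronization is to compare how the shared overlap region changes over time in each camera and find the shift that best aligns those change signals. The sketch below illustrates that idea with NumPy cross-correlation; it is a coarse, integer-frame simplification rather than the compensation described by the researchers, and all variable names and the synthetic data are assumptions.

```python
# Hedged sketch: estimate an integer frame offset between two unsynchronized
# cameras by cross-correlating per-frame activity in their shared overlap
# region. Illustrative simplification only; all names are assumptions.
import numpy as np

def activity_signal(frames: np.ndarray) -> np.ndarray:
    """Mean absolute change between consecutive frames of a (T, H, W) clip."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return diffs.mean(axis=(1, 2))

def estimate_frame_offset(frames_a: np.ndarray, frames_b: np.ndarray) -> int:
    """Offset d such that content in camera A's frame t appears in camera B's frame t + d."""
    a = activity_signal(frames_a)
    b = activity_signal(frames_b)
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    corr = np.correlate(b, a, mode="full")      # score for every relative shift
    return int(np.argmax(corr)) - (len(a) - 1)  # shift with the best agreement

# Synthetic check: camera B's clip starts 3 frames before camera A's.
rng = np.random.default_rng(0)
base = rng.random((103, 8, 8))
print(estimate_frame_offset(base[3:], base[:-3]))  # expected: 3
```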


Funding came from the Swiss National Science Foundation. The findings are to be presented at Eurographics 2015, the annual conference of the European Association for Computer Graphics.

For more information, visit www.disneyresearch.com.


Published: May 2015