Multiview Video Streaming Emerges as a Form of Visual-Content Representation

Photonics.com
Mar 2017
SAARBRÜCKEN, Germany, March 31, 2017 — Multiview video streaming — the ability to view a recorded scene from different perspectives — could revolutionize the entertainment industry and the workplace. Possible applications range from consumer electronics, computer games and virtual worlds to realistic video conferences and autonomous driving.

Although the components of this technology work individually, so far there has been no fully integrated, functioning multiview video system. Researchers at Saarland University are working with the Intel Visual Computing Institute to change that.

With multiview video, a recorded scene can be rendered continuously from different viewpoints under the interactive control of the user. The key obstacles to deploying an efficient multiview video system are the lack of a unified view of the system, its consumption scenarios and its usage environments. The central requirement is to transmit, decode and render multiview video in real time and without visually disturbing artifacts. In addition, the system needs to operate efficiently across different networks and devices.

Researching multiview video streaming: Tobias Lange and Thorsten Herfet of the Telecommunications Lab at Saarland University. Courtesy of Oliver Dietze.

“The data rate is ludicrous. Even now we need such a high bandwidth that most contemporary Internet connections would be overloaded,” said researcher Tobias Lange.

The researchers are tackling this challenge by improving the processing pipeline step by step. To minimize delay, they developed a solution based on two components. First, they implemented a server-side simulation of the streaming client’s buffer, which provides low-delay feedback for rate selection. Second, they designed a hybrid rate-adaptation logic based on both the estimated throughput and the buffer information, stabilizing the adaptive response to the dynamics of the transport layer.
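The two components described above can be sketched in a few lines. The following is a minimal, hypothetical illustration of a hybrid rate-adaptation rule, not the researchers' actual implementation: the function names, bitrate ladder and buffer thresholds are assumptions made for the example. The idea is that a throughput estimate picks a candidate bitrate, and the server-side simulation of the client buffer vetoes it when the buffer runs low.

```python
def select_bitrate(bitrates, est_throughput_bps, buffer_level_s,
                   target_buffer_s=4.0):
    """Hybrid rate adaptation sketch: pick the highest bitrate the
    estimated throughput can sustain, then step down conservatively
    when the simulated client buffer falls below its target."""
    ladder = sorted(bitrates)

    # Throughput-based candidate: highest rate the link can carry.
    sustainable = [b for b in ladder if b <= est_throughput_bps]
    candidate = sustainable[-1] if sustainable else ladder[0]

    # Buffer-based correction: if the server-side buffer simulation
    # reports less than half the target fill, step down one rung to
    # avoid a stall at the receiver.
    if buffer_level_s < 0.5 * target_buffer_s:
        lower = [b for b in ladder if b < candidate]
        if lower:
            candidate = lower[-1]
    return candidate
```

Because the buffer state is simulated on the server rather than reported back by the client, the correction can be applied per chunk without waiting a round trip, which is what enables the low-delay behavior described above.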

Results have shown that the researchers’ approach improves the user-perceived video quality for dynamic streaming with a delay as low as the video-chunk duration.

To encode and decode the data in an acceptable time span, the researchers rely on a distributed approach: each camera has its own mini-computer attached to it.

Additionally, the researchers have expanded the parameters for multiview video streaming to exploit the fact that multiple views of the same scene are available. This includes the capability to drop certain views from the transmission when the available bandwidth is limited, instead of reducing the overall video quality. The missing views can then be reconstructed at slightly lower quality on the receiver using the remaining views. To enable this feature, they developed an algorithm that determines the optimal choice of views to transmit, given a bandwidth constraint and the number of views required on the client.
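The view-selection step can be illustrated with a brute-force sketch. This is an assumption-laden stand-in, not the researchers' algorithm: the per-view bitrates, the `utility` quality model and the function names are invented for the example. It enumerates subsets of camera views and keeps the best one that fits the bandwidth budget while retaining at least the number of views the client needs for reconstruction.

```python
from itertools import combinations

def choose_views(view_rates, bandwidth_bps, min_views, utility):
    """Pick the subset of views to transmit that maximizes a quality
    utility, subject to a total-bitrate budget and a minimum view
    count needed for receiver-side reconstruction.

    view_rates: dict mapping view id -> bitrate in bits per second.
    Returns the best feasible subset as a tuple, or None if no
    subset fits the budget."""
    ids = list(view_rates)
    best, best_utility = None, float("-inf")
    for k in range(min_views, len(ids) + 1):
        for subset in combinations(ids, k):
            cost = sum(view_rates[v] for v in subset)
            if cost <= bandwidth_bps:
                u = utility(subset)
                if u > best_utility:
                    best, best_utility = subset, u
    return best
```

A real system would replace the exhaustive search with something cheaper, since the number of subsets grows exponentially with the camera count; the sketch only shows the shape of the optimization problem.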

With these incremental improvements, the researchers are moving toward a comprehensive working system for multiview video streaming.

The research was presented at the CeBIT 2017 Computer Fair in Hannover, Germany, March 20-24, 2017. Additional information about the project is available on the Intel Visual Computing site.  