The Relightables System Captures Character Lighting for Virtually Any Environment

Computer scientists at Google have developed a system for high-quality, relightable performance capture. The volumetric capture system, called The Relightables, can capture full-body reflectance of 3D human performances and seamlessly blend them into a new environment through augmented reality (AR) or into digital scenes in films and games. Character lighting can be customized in real time.

The team designed a novel active depth sensor to capture 12.4-MP depth maps, along with a hybrid geometric and machine learning reconstruction pipeline to process the high-resolution input and produce a volumetric video. The researchers then generated temporally consistent reflectance maps for dynamic performers by leveraging the information contained in two alternating color gradient illumination images acquired at 60 Hz.
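The paper's exact reconstruction and reflectance pipeline is not reproduced here, but the general idea behind alternating gradient-illumination frames can be sketched: summing the two complementary conditions approximates a uniformly lit image (an albedo estimate), while their normalized difference encodes photometric normals. The function below is a simplified, hypothetical illustration of that idea; the function name, array shapes, and the ratio formulation are assumptions, not the authors' implementation.

```python
import numpy as np

def reflectance_from_gradient_pair(img_grad, img_inv_grad, eps=1e-6):
    """Illustrative sketch (not the paper's pipeline): estimate per-pixel
    albedo and surface normals from a pair of gradient / inverse-gradient
    illumination images, following the general color-gradient-illumination idea.

    img_grad, img_inv_grad: HxWx3 float arrays in [0, 1].
    """
    # The sum of the two complementary lighting conditions approximates a
    # uniformly lit (full-on) image, which serves as the albedo estimate.
    full_on = img_grad + img_inv_grad
    albedo = full_on / 2.0

    # The ratio of the difference to the sum encodes the surface orientation
    # along the gradient axes (one axis per color channel in the color variant).
    ratio = (img_grad - img_inv_grad) / np.maximum(full_on, eps)

    # Normalize the per-pixel vector to obtain a unit normal estimate.
    norm = np.linalg.norm(ratio, axis=-1, keepdims=True)
    normals = ratio / np.maximum(norm, eps)
    return albedo, normals
```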


Computer scientists at Google have created a comprehensive system that captures full-body reflectance of 3D human performances and seamlessly blends them into the real world through AR or into digital scenes in films, games, and more. Courtesy of SIGGRAPH Asia.

The Google team demonstrated The Relightables on subjects recorded inside a custom geodesic sphere outfitted with 331 custom color LED lights, an array of high-resolution cameras, and a set of custom high-resolution depth sensors. The system captured about 65 GB of raw data per second from nearly 100 cameras, and its computational framework enabled data to be processed effectively at this scale.
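As a rough plausibility check of the quoted data rate, the snippet below multiplies out assumed sensor parameters (about 90 active sensors, 12.4-MP frames, 60 Hz capture, roughly one byte per raw pixel). These values are assumptions for illustration, not specifications taken from the paper.

```python
# Back-of-the-envelope check of the quoted raw data rate.
cameras = 90          # "nearly 100 cameras" (assumed)
pixels = 12.4e6       # 12.4-MP frames
rate_hz = 60          # capture rate
bytes_per_px = 1      # packed raw sensor data (assumed)

gb_per_s = cameras * pixels * rate_hz * bytes_per_px / 1e9
print(f"~{gb_per_s:.0f} GB/s")   # ~67 GB/s, in line with the quoted ~65 GB/s
```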

The Relightables system can capture the reflectance information on a person — that is, the way lighting interacts with the person’s skin. It can record people while they move freely within the volume, making it possible to relight their animation in arbitrary environments. The system combines the ability to realistically relight humans for arbitrary environments with the benefits of free-viewpoint volumetric capture and new levels of geometric accuracy for more dynamic performances, the researchers said.
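To make "relighting in arbitrary environments" concrete, the sketch below shades a captured albedo/normal map pair under new lighting using a plain Lambertian model with a handful of directional lights. This is a deliberate simplification for illustration: the actual system recovers richer reflectance than a diffuse term, and the function and parameter names here are hypothetical.

```python
import numpy as np

def relight_lambertian(albedo, normals, light_dirs, light_colors):
    """Illustrative relighting: shade a performer's albedo/normal maps under
    new lighting, approximated by a small set of directional lights.

    albedo: HxWx3, normals: HxWx3 (unit vectors),
    light_dirs: Nx3 (unit vectors), light_colors: Nx3.
    """
    shading = np.zeros_like(albedo)
    for d, c in zip(light_dirs, light_colors):
        # Clamped cosine term of the Lambertian model.
        n_dot_l = np.clip(normals @ d, 0.0, None)[..., None]
        shading += n_dot_l * c
    return albedo * shading
```

In practice, an environment map would be sampled down to such a set of lights (or projected onto a basis such as spherical harmonics), with specular terms layered on top of the diffuse shading.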

Historically, cameras have recorded people from a single viewpoint under a single lighting condition. The Relightables allows the user to record a person and then view that person from any viewpoint under any lighting condition, eliminating the need for a green screen to create special effects and allowing for more flexible lighting.

The interactions of space, light, and shadow between the performer and the environment are critical to creating a sense of presence. Beyond just cutting and pasting a 3D video capture, The Relightables system gives the user the ability to record a person and then seamlessly place that person into new environments — whether in their own space for AR experiences, or in a virtual reality experience, film, or game.

The Relightables system could significantly improve the level of realism achieved when volumetrically captured human performances are placed into arbitrary computer-generated scenes. The Google team presented the components of The Relightables system, from capture to processing to display, at ACM SIGGRAPH Asia, held Nov. 17-20, 2019, in Brisbane, Australia.

The research was published in ACM Transactions on Graphics (https://doi.org/10.1145/3355089.3356571).



The Relightables: volumetric performance capture of humans with realistic relighting. Courtesy of K. Guo et al.

