
Perfecting Digital Imaging

The best software and video cameras lag behind reality, unable to capture images that look exactly the way our eyes expect them to look. But new research in computer graphics could change all that, advancing artificial vision, 3-D displays and video editing.

In three separate papers presented this week at Siggraph 2013 in Anaheim, Calif., Hanspeter Pfister and Todd Zickler of the Harvard School of Engineering and Applied Sciences (SEAS) hope to narrow the gap between “virtual” and “real” by answering a common question: How do we see what we see?

Realistic soap
In the first of Zickler’s projects, the William and Ami Kuan Danoff Professor of Electrical Engineering and Computer Science set out to find better ways to mimic the appearance of a translucent object, such as a bar of soap. This could help clarify how humans perceive and recognize real objects and how software can exploit the details of that process to make the most realistic computer-rendered images possible. 

“If I put a block of butter and a block of cheese in front of you, and they’re the same color, and you’re looking for something to put on your bread, you know which is which,” Zickler said. “The question is, how do you know that? What in the image is telling you something about the material?”

His hope is to eventually understand these properties well enough to instruct a computer with a camera to identify what material an object is made of and to know how to properly handle it — how much it weighs or how much pressure to safely apply to it — the way humans do.


The subtleties in these computer-generated images of translucent materials are important. Texture, color, contrast and sharpness combine to create a realistic image. Courtesy of Ioannis Gkioulekas and Shuang Zhao.

The new approach focuses on a translucent material’s phase function, part of the mathematical description of how light scatters as it travels inside an object and, therefore, of how we see what we see.
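
As a concrete, if simplified, illustration: the Henyey-Greenstein model is a standard one-parameter phase function used throughout graphics. The sketch below is purely illustrative and is not one of the phase functions studied in the paper; the anisotropy parameter g is chosen arbitrarily.

```python
import numpy as np

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function: the probability density of
    light scattering through an angle theta inside a material.
    g > 0 favors forward scattering, g < 0 backward, g = 0 isotropic."""
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * cos_theta) ** 1.5)

# Compare a strongly forward-scattering material with a nearly isotropic one.
angles = np.linspace(0.0, np.pi, 5)
print(henyey_greenstein(np.cos(angles), g=0.8))  # sharply peaked forward
print(henyey_greenstein(np.cos(angles), g=0.1))  # nearly uniform
```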

Phase function shape has long been considered relevant to an object’s translucent appearance, but formal perceptual studies had never been carried out: the space of possible phase functions is so vast, and so perceptually diverse, that modern computational tools were needed to generate and analyze the sheer number of images required.

Taking advantage of increased computational power, Zickler’s team trimmed the potential space of images to a manageable size. They first rendered thousands of computer-generated images of a single object, each with a slightly different simulated phase function, so that every image’s translucency differed subtly from the next. A program then compared the pixel colors and brightness of each image against every other image in the set to quantify how different each pair was.
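
A minimal sketch of that comparison step might look like the following; the root-mean-square pixel difference here is a stand-in for whatever image-difference measure the team actually used, which the article does not specify.

```python
import numpy as np

def pairwise_image_distances(images):
    """Given rendered images as an array of shape (n, height, width, channels),
    return an (n, n) matrix of pairwise root-mean-square pixel differences."""
    n = len(images)
    flat = images.reshape(n, -1).astype(np.float64)
    dists = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = np.sqrt(np.mean((flat[i] - flat[j]) ** 2))
            dists[i, j] = dists[j, i] = d
    return dists

# Demo with random stand-in "renderings":
images = np.random.rand(10, 64, 64, 3)
print(pairwise_image_distances(images).shape)  # (10, 10)
```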

Through this process, the software created a map of the phase function space according to the relative differences of image pairs, making it easy for the researchers to identify a much smaller set of images and phase functions that were representative of the whole space.
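
One plausible way to turn pairwise differences into such a map, sketched below under my own assumptions (the article does not name the embedding method), is classical multidimensional scaling followed by farthest-point sampling to pick representatives.

```python
import numpy as np

def classical_mds(dists, dims=2):
    """Embed points so that Euclidean distances in `dims` dimensions
    approximate the given (n, n) pairwise distance matrix."""
    n = dists.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (dists ** 2) @ J          # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:dims]      # largest eigenvalues first
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

def farthest_point_sample(points, k):
    """Greedily pick k mutually distant points (indices) as representatives."""
    chosen = [0]
    min_d = np.linalg.norm(points - points[0], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(min_d))
        chosen.append(nxt)
        min_d = np.minimum(min_d, np.linalg.norm(points - points[nxt], axis=1))
    return chosen
```

Running `classical_mds` on the distance matrix from the previous sketch and then `farthest_point_sample` on the embedding would yield a small, diverse set of candidate images to show human observers.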

People were asked to compare the representative images and to judge how similar or different they were, shedding light on the properties that help us decide which objects are plastic and which are soap simply by looking at them.

“This study, aiming to understand the appearance space of phase functions, is the tip of the iceberg for building computer vision systems that can recognize materials,” Zickler said.

Next, the investigators hope to find ways to accurately measure a material’s phase functions instead of making them up computationally. Zickler’s team — including researchers from Harvard SEAS, MIT and Cornell University — has already begun making progress on this using a new system that will be presented at Siggraph Asia in December.


Adaptive displays
A new type of screen hardware that displays different images when lit or viewed from various directions was the topic of Zickler’s second paper at the 40th International Conference and Exhibition on Computer Graphics and Interactive Techniques. 

By creating tiny grooves of varying depths across the screen’s surface, Zickler’s team created optical interference effects that cause the thin surface to look different when illuminated or viewed from different angles.


Computer science researchers at Harvard are improving computer graphics, display technologies and digital editing software. Courtesy of Ioannis Gkioulekas and Shuang Zhao.

The paper essentially asks, “If I know what appearances I want the screen to have, how do I optimize the geometric structure to get that?” Zickler said.

The solution lies in bidirectional reflectance distribution functions (BRDFs): mathematical functions that represent how light coming from a particular direction will reflect off a surface.
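
In standard radiometric notation, which the article does not spell out, a BRDF is the ratio of the radiance reflected toward an outgoing direction to the irradiance arriving from an incoming one:

```latex
% BRDF: reflected radiance along \omega_o per unit incident
% irradiance from \omega_i (\theta_i is the angle of incidence).
f_r(\omega_i, \omega_o) = \frac{\mathrm{d}L_o(\omega_o)}{L_i(\omega_i)\,\cos\theta_i\,\mathrm{d}\omega_i}
```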

Previous attempts to control surface reflection for graphics applications succeeded only on surfaces displaying huge images, with pixels roughly an inch across, because their analyses did not account for interference effects. The new work, however, demonstrates that interference effects can be exploited to control reflection from a screen at micron scales using well-known photolithographic techniques.
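
As a rough illustration of the underlying physics (my simplification, not the paper’s model): light reflected from the bottom of a groove of depth d travels an extra 2d relative to light reflected from the top surface, and summing the resulting phasors over a patch of grooves shows how the depth distribution selects which wavelengths reflect strongly.

```python
import numpy as np

def patch_reflectance(depths_nm, wavelength_nm):
    """Illustrative normal-incidence reflectance of a patch of grooves.
    A groove of depth d adds a round-trip path of 2*d, i.e. a phase
    shift of 4*pi*d/lambda; the patch's specular reflectance scales
    with the squared magnitude of the mean phasor."""
    phases = 4.0 * np.pi * np.asarray(depths_nm, dtype=np.float64) / wavelength_nm
    return np.abs(np.exp(1j * phases).mean()) ** 2

depths = [0, 150, 300, 450]        # hypothetical groove depths in nanometers
for lam in (450, 550, 650):        # roughly blue, green and red light
    print(lam, round(patch_reflectance(depths, lam), 3))
```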

In the future, this kind of optimization could enable multiview, lighting-sensitive displays, where a viewer rotating around a flat surface could perceive a 3-D object while looking at the surface from different angles, and where the virtual object would correctly respond to external lighting.

“Looking at such a display would be exactly like looking through a window,” Zickler said.

The research was done in collaboration with colleagues from Harvard SEAS, the Weizmann Institute of Science and MIT.

Vivid color
The final paper, led by Pfister, tackled a problem in digital film editing. 

Color grading — editing a video to impose a particular color palette — has historically been a painstaking manual process requiring many hours’ work by skilled artists. Amateur filmmakers, therefore, cannot achieve the characteristically rich color palettes of professional films.

“The starting idea was to appeal to [a] broad audience, like the millions of people on YouTube,” said lead author Nicolas Bonneel, a postdoctoral researcher in Pfister’s group at SEAS.


Color-grading example. Courtesy of Nicolas Bonneel.

Pfister’s team hopes to make frame-by-frame editing unnecessary by creating software that lets users simply select, say, the Amélie look or the Transformers look. The computer would then apply that color palette to the user’s video based on a few representative frames. The user need only indicate where the foreground and background are in each frame; the software does the rest, interpolating the color transformations throughout the video.
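
A minimal sketch of the interpolation idea, assuming a simple per-channel mean-and-variance color transfer (the paper’s actual transfer model is more sophisticated): fit a transform at a few keyframes, then blend the transform parameters across the frames in between.

```python
import numpy as np

def fit_transfer(src, ref):
    """Per-channel affine transform mapping src's color statistics onto ref's.
    src, ref: float arrays of shape (height, width, 3) in [0, 1]."""
    scale = ref.std(axis=(0, 1)) / (src.std(axis=(0, 1)) + 1e-8)
    offset = ref.mean(axis=(0, 1)) - scale * src.mean(axis=(0, 1))
    return scale, offset

def apply_interpolated(frame, t, key_a, key_b):
    """Apply a transform blended between two keyframe fits, t in [0, 1]."""
    scale = (1 - t) * key_a[0] + t * key_b[0]
    offset = (1 - t) * key_a[1] + t * key_b[1]
    return np.clip(frame * scale + offset, 0.0, 1.0)

# Demo with random stand-in frames:
src = np.random.rand(48, 64, 3)
ref = np.random.rand(48, 64, 3)
key = fit_transfer(src, ref)
print(apply_interpolated(src, 0.5, key, key).shape)  # (48, 64, 3)
```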

The color grading method could be incorporated into commercially available editing software within the next few years, Bonneel estimated.

Kalyan Sunkavalli and Sylvain Paris of Adobe Systems Inc. also contributed to the research.

The three papers will be published in ACM Transactions on Graphics.

For more information, visit: seas.harvard.edu

Published: July 2013
Glossary
reflection
Return of radiation by a surface, without change in wavelength. The reflection may be specular, from a smooth surface; diffuse, from a rough surface or from within the specimen; or mixed, a combination of the two.
refraction
The bending of oblique incident rays as they pass from a medium having one refractive index into a medium with a different refractive index.
