
3D Data Captured with 2D Camera

With a few modifications and expanded processing capabilities, standard digital cameras can extract detailed 3D information from a single exposure.

Developed at Duke University, the method does not sacrifice 2D still-image quality, and it could lead to improved image stabilization and autofocusing capabilities in future cameras.

Traditional 3D imaging relies on the principle of parallax, in which a scene is recorded through two slightly offset lenses. This recording process, however, requires twice as much data as a 2D image, making 3D photography and video bulkier, more expensive and more data-intensive.

"Real scenes are in three dimensions and they're normally captured by taking multiple images focused at various distances," said Patrick Llull, a graduate student at Duke. "A variety of single-shot approaches to improve the speed and quality of 3D image capture have been proposed over the past decades. Each approach, however, suffers from permanent degradations in 2D image quality and/or hardware complexity."

The experimental setup (a) and visualizations of a scene as observed by the prototype camera when focused at various depths (b-d). Courtesy of The Optical Society/Optica.

To unlock the 3D potential of a 2D camera, the researchers programmed it to perform three functions simultaneously: sweeping through the focus range with the sensor, collecting light over a set period of time in a process called integration, and activating the stabilization module.
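
A toy numerical model helps make the focal sweep and integration concrete: while the focus setting sweeps during a single exposure, the sensor keeps adding up light, so each depth in the scene contributes a different mix of blurs to the final measurement. The Python sketch below is purely illustrative; the layered scene, sweep positions and Gaussian defocus model are assumptions, and it omits the lens wobble contributed by the stabilization module.

import numpy as np
from scipy.ndimage import gaussian_filter

def focal_sweep_capture(layers, layer_depths, focus_positions, defocus_gain=1.5):
    # Toy model: the sensor integrates light while the focus sweeps, so each
    # depth layer is recorded with a different blur at each instant of the sweep.
    measurement = np.zeros_like(layers[0], dtype=float)
    for focus in focus_positions:                      # focus sweep within one exposure
        for layer, depth in zip(layers, layer_depths):
            sigma = defocus_gain * abs(depth - focus)  # defocus grows away from the focal plane
            measurement += gaussian_filter(layer.astype(float), sigma=max(sigma, 1e-3))
    return measurement / len(focus_positions)          # normalize over the sweep steps

# Example: two flat layers at assumed depths 0.2 and 0.8, swept over five focus settings
layers = [np.random.rand(64, 64), np.random.rand(64, 64)]
coded = focal_sweep_capture(layers, [0.2, 0.8], np.linspace(0.0, 1.0, 5))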


When the optical stabilization is engaged, it wobbles the lens to move the image relative to a fixed point. Combined with the focal sweep, this encodes the incoming light into a single measurement in a way that preserves image detail while giving each focus position a distinct optical response.

The images that otherwise would have been acquired at various focal settings are directly encoded into this measurement based on where they reside in the depth of field. This effectively creates a depth map that describes the focus position of each pixel in the image.
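
The article does not detail how the depth map is decoded from the coded exposure, but a conventional depth-from-focus calculation conveys what the map represents: for each pixel, find the focus setting at which it appears sharpest. The sketch below assumes a multi-image focal stack purely for illustration; the Duke method recovers an equivalent map from one coded measurement.

import numpy as np
from scipy.ndimage import laplace, uniform_filter

def depth_from_focus(stack, focus_positions, window=7):
    # For each pixel, pick the focus setting that maximizes a local sharpness
    # measure (absolute Laplacian averaged over a small window).
    sharpness = np.stack([
        uniform_filter(np.abs(laplace(img.astype(float))), size=window)
        for img in stack
    ])
    best = np.argmax(sharpness, axis=0)            # index of the sharpest focus per pixel
    return np.asarray(focus_positions)[best]       # convert indices to focus positions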

Finally, the image and depth map are processed with a commercial 3D graphics engine. The resulting image can be used to determine the optimal focal setting for subsequent full-resolution 2D shots — as an autofocus algorithm does, but from only one image. Additionally, synthetic refocusing may be used on the resulting 3D imagery to display the scene as viewed at different depths of focus.
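
A minimal stand-in for that synthetic refocusing step, assuming a hypothetical depth map and a simple depth-dependent Gaussian blur rather than the commercial graphics engine the researchers used, might look like this:

import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_refocus(image, depth_map, focus_depth, aperture=2.0, levels=8):
    # Blur each pixel in proportion to how far its depth lies from the chosen
    # focal plane; quantize the blur into a few levels so each level needs
    # only one filter pass.
    blur = aperture * np.abs(depth_map - focus_depth)
    out = np.zeros_like(image, dtype=float)
    edges = np.linspace(blur.min(), blur.max(), levels + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (blur >= lo) & (blur <= hi)
        if mask.any():
            sigma = max((lo + hi) / 2.0, 1e-3)
            out[mask] = gaussian_filter(image.astype(float), sigma=sigma)[mask]
    return out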

The researchers used a comparatively long exposure time to compensate for their experimental setup. To emulate the workings of a camera, they needed a beamsplitter to control the deformable lens, an extra step that sacrificed about 75 percent of the light received.

"When translated to a fully integrated camera without a beamsplitter, this light loss will not be an issue and much faster exposure times will be possible," Llull said.

The research was published in Optica (doi: 10.1364/optica.2.000822).

Published: September 2015
cameras, lenses, Research & Technology, Americas, North Carolina, Duke University, Imaging, computational imaging, 3D imaging, Patrick Llull, Optics, Tech Pulse
