
Lensless Camera Creates Detailed 3D Images Without Scanning

BERKELEY, Calif., Dec. 27, 2017 — A new camera, called DiffuserCam, produces 3D images from a single 2D image without using any lenses. The camera consists only of a diffuser placed a few millimeters in front of an image sensor. DiffuserCam integrates concepts from lensless camera technology and imaging through complex media with computational imaging design principles. The proposed architecture and algorithm for the camera could enable high-resolution, light-efficient lensless 3D imaging of large and dynamic 3D samples in an extremely compact package.
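In broad strokes (this is a generic lensless-imaging formulation offered for orientation, not necessarily the paper's exact notation), the diffuser turns each point in the scene into a distinctive caustic pattern on the sensor, so the recorded 2D image b can be written as a sum over depth slices v_z of the scene, each convolved with that depth's pattern h_z:

    b(x, y) = \sum_{z} (h_z \ast v_z)(x, y)

Recovering the 3D scene then amounts to inverting this relationship for all of the slices v_z at once.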

Lensless DiffuserCam, University of California, Berkeley.
The lensless DiffuserCam consists of a diffuser placed in front of a sensor (bumps on the diffuser are exaggerated in illustration). The system turns a 3D scene into a 2D image on the sensor. After a one-time calibration, an algorithm is used to reconstruct 3D images computationally. The result is a 3D image reconstructed from a single 2D measurement. Courtesy of Laura Waller, University of California, Berkeley.

Researchers from the University of California, Berkeley, used compressed sensing algorithms to reconstruct more voxels than the number of pixels captured. By combining a physical approximation with a simple calibration scheme, they solved the resulting large-scale inverse problem in a computationally efficient way. According to the researchers, this approach let them reconstruct several orders of magnitude more voxels than had been achieved in previous work.
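The sketch below is a minimal illustration of that kind of sparsity-regularized reconstruction, not the group's actual code: it assumes a single depth plane, a shift-invariant point spread function (PSF), circular convolution, and a basic ISTA solver with an L1 prior. The released DiffuserCam software (linked at the end of this article) handles many depth planes, sensor cropping and a more capable solver.

import numpy as np

def ista_reconstruct(measurement, psf, lam=1e-3, step=1e-1, iters=200):
    """Recover a sparse scene v from measurement b ~= psf * v (2D circular convolution)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))   # transfer function of the diffuser
    Hconj = np.conj(H)
    B = np.fft.fft2(measurement)
    v = np.zeros_like(measurement)            # initial estimate of the scene
    for _ in range(iters):
        # Gradient of the data term 0.5 * ||h * v - b||^2, computed with FFTs
        residual = np.fft.fft2(v) * H - B
        grad = np.real(np.fft.ifft2(Hconj * residual))
        v = v - step * grad
        # Soft-thresholding enforces sparsity (the compressed-sensing prior)
        v = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)
    return v

Because the scene is assumed sparse, the number of recovered voxels can exceed the number of sensor pixels, which is the essence of the compressed-sensing argument described above.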

“Our new camera is a great example of what can be accomplished with computational imaging — an approach that examines how hardware and software can be used together to design imaging systems,” said Laura Waller, a professor at UC Berkeley. “We made a concerted effort to keep the hardware extremely simple and inexpensive. Although the software is very complicated, it can also be easily replicated or distributed, allowing others to create this type of camera at home.”

The researchers demonstrated a prototype DiffuserCam system built entirely from commodity hardware. They reconstructed 3D objects on a nonuniformly spaced grid of 100 million voxels from a single 1.3-megapixel image. According to the researchers, the reconstructions showed true depth sectioning, allowing them to generate 3D renderings of the sample.

A relative of the light field camera, DiffuserCam improves on traditional light field camera capabilities by using compressed sensing to avoid the loss of resolution that typically comes with microlens arrays.

“I wanted to see if we could achieve the same imaging capabilities using simple and cheap hardware,” said Waller. “If we have better algorithms, could the carefully designed, expensive microlens arrays be replaced with a plastic surface with a random pattern such as a bumpy piece of plastic?”


Because the exact size and shape of the bumps on the camera’s diffuser are unknown, a few images of a moving point of light must currently be acquired to calibrate the software before imaging. The researchers are working on a way to eliminate this calibration step by using the raw data for calibration. They also want to improve the accuracy of the software and make the 3D reconstruction faster.
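As a rough illustration of that point-source calibration step (the camera and stage control functions here are hypothetical placeholders, not part of the published software), the procedure amounts to recording and normalizing one caustic image per depth:

import numpy as np

def calibrate_psf_stack(depths_mm, capture_frame, move_point_source):
    """Return one normalized PSF image per calibration depth."""
    psfs = []
    for z in depths_mm:
        move_point_source(z)              # position the point of light at depth z
        frame = capture_frame().astype(float)
        frame -= frame.min()              # remove the background offset
        frame /= frame.sum()              # give each PSF unit energy
        psfs.append(frame)
    return np.stack(psfs)                 # shape: (num_depths, height, width)

The stack of calibrated PSFs then plays the role of the depth-dependent patterns h_z in the reconstruction.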

Lensless DiffuserCam, University of California, Berkeley.
The researchers used the DiffuserCam to reconstruct the 3D structure of leaves from a small plant. They plan to use the new camera to watch neurons fire in living mice without using a microscope. Courtesy of Laura Waller, University of California, Berkeley.

The new camera will be at the heart of a research project funded by DARPA’s Neural Engineering System Design program. Specifically, it will be used to watch neuron activity in mice in vivo without the aid of a microscope, imaging millions of neurons in one shot. Because the camera is lightweight and requires no microscope or objective lens, it can be attached to a transparent window in a mouse’s skull, allowing neuronal activity to be linked with behavior. Several sensor arrays with overlaid diffusers could be tiled to image larger areas.

The team believes that DiffuserCam could be used for remote diagnostics, mobile photography, in vivo microscopy and other applications involving 3D capture.

“We think the camera could be useful for self-driving cars, where the 3D information can offer a sense of scale, or it could be used with machine learning algorithms to perform face detection, track people or automatically classify objects,” said Waller.

The open-source software for DiffuserCam is available on the project page: DiffuserCam: Lensless Single-exposure 3D Imaging.

The research was published in Optica, a publication of OSA, The Optical Society (doi: 10.1364/OPTICA.5.000001).

Published: December 2017
