
One Photon Per Pixel Produces 3-D Lidar Image

CAMBRIDGE, Mass., Dec. 4, 2013 — With the detection of a single photon from each pixel location, a new lidarlike system can gauge depth and produce 3-D images. The technique is expected to be broadly applicable to microscopy and remote sensing.

Lidar rangefinders are commonly used in applications such as surveying and autonomous vehicle control. The system gauges depth by emitting short bursts of laser light and measuring the time it takes for the reflected photons to arrive back and be detected.
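The time-of-flight relation behind this is standard lidar physics rather than anything specific to the MIT system: a photon's round-trip time, multiplied by the speed of light and halved, gives the distance to the target. A minimal sketch:

```python
# Time-of-flight ranging: depth is half the round-trip distance.
# Illustrative sketch of the standard lidar relation, not MIT's implementation.

C = 299_792_458.0  # speed of light, m/s

def depth_from_round_trip(t_seconds):
    """Depth implied by the measured round-trip time of a laser pulse."""
    return C * t_seconds / 2.0

# A photon returning after ~66.7 ns implies a target about 10 m away.
print(depth_from_round_trip(66.7e-9))
```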

The new lidarlike system, developed at MIT's Research Laboratory of Electronics (RLE), can infer depth from one-hundredth as much light, meaning it could yield substantial savings in energy and time, both of which are at a premium in autonomous vehicles trying to avoid collisions.


An example of a conventional lidar scanner that gauges depth by emitting laser light and measuring the time it takes for photons to rebound. MIT researchers developed a lidarlike system that can gauge depth by detecting only a single reflected photon. Photo courtesy of Wikipedia.

The system can also use the same reflected photons to produce images of a quality that a conventional imaging system would require 900 times as much light to match — and it works much more reliably than lidar in bright sunlight, when ambient light can yield misleading readings. All the hardware it requires can already be found in commercial lidar systems; the new system just deploys that hardware in a manner more in tune with the physics of low-light-level imaging and natural scenes.

The very idea of forming an image with only a single photon detected at each pixel location is counterintuitive, said Ahmed Kirmani, a graduate student in MIT's Department of Electrical Engineering and Computer Science. "The way a camera senses images is through different numbers of detected photons at different pixels," he said. "Darker regions would have fewer photons, and therefore accumulate less charge in the detector, while brighter regions would reflect more light and lead to more detected photons and more charge accumulation."

In conventional lidar, the laser fires light pulses toward a sequence of discrete positions, which collectively form a grid; each location in the grid corresponds to a pixel in the final image. The technique, known as raster scanning, is how old cathode-ray-tube televisions produced images, illuminating one phosphor dot on the screen at a time.

In a conventional system, the laser fires a large number of pulses at each grid position, until the measured intervals between pulse emission and photon detection are consistent enough to rule out the misleading signals produced by stray photons. The MIT system, by contrast, fires repeated pulses at each grid position only until it detects a single reflected photon, then moves on to the next position.

A highly reflective surface should yield a detected photon after fewer bursts than a less-reflective surface would. So the new system produces an initial, provisional map of the scene based simply on the number of times the laser has to fire to get a photon back.
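The idea can be sketched as a toy simulation. Under the simplifying assumption that each pulse returns a detected photon with probability proportional to the surface's reflectivity (the `detect_scale` factor here is illustrative, not from the paper), the pulse count to first detection is geometrically distributed, so brighter pixels report a photon after fewer pulses on average:

```python
import random

def pulses_until_first_photon(reflectivity, detect_scale=0.05, rng=random):
    """Simulate firing pulses at one grid position until a photon returns.
    detect_scale is an assumed per-pulse detection probability for a
    perfectly reflective surface (illustrative, not from the paper)."""
    p = reflectivity * detect_scale
    count = 1
    while rng.random() >= p:
        count += 1
    return count

# Averaged over many trials the count is ~1/p, so 1/count hints at reflectivity.
rng = random.Random(0)
bright = [pulses_until_first_photon(0.9, rng=rng) for _ in range(2000)]
dark = [pulses_until_first_photon(0.1, rng=rng) for _ in range(2000)]
print(sum(bright) / len(bright) < sum(dark) / len(dark))  # brighter needs fewer pulses
```

This is the sense in which "the number of times the laser has to fire" carries intensity information even though each pixel contributes only one detected photon.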

The photon registered by the detector could, however, be a stray detection triggered by background light. Fortunately, such false readings can be characterized statistically: they follow a pattern known in signal processing as “Poisson noise.”

Instead of simply filtering out noise according to the Poisson statistics to produce an intelligible image, the new system guides the filtering process by assuming that adjacent pixels will, more often than not, have similar reflective properties and will occur at approximately the same depth. That assumption enables the system to filter out noise in a more principled way.
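The paper's actual filtering is a statistically principled optimization; as a crude stand-in, the neighbor-similarity assumption can be illustrated with a simple median filter on the depth map, which removes isolated spikes (such as a stray background-light detection) while preserving regions where neighboring pixels agree:

```python
def median_filter(depth, k=1):
    """Suppress isolated noise in a depth map (list of lists) by replacing
    each pixel with the median of its (2k+1)x(2k+1) neighborhood.
    A crude illustration of the neighbor-similarity idea, not the
    paper's algorithm."""
    h, w = len(depth), len(depth[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            window = [depth[a][b]
                      for a in range(max(0, i - k), min(h, i + k + 1))
                      for b in range(max(0, j - k), min(w, j + k + 1))]
            window.sort()
            out[i][j] = window[len(window) // 2]
    return out

# A stray-photon spike at the center of a flat 3x3 patch is removed.
noisy = [[2.0, 2.0, 2.0],
         [2.0, 9.0, 2.0],
         [2.0, 2.0, 2.0]]
print(median_filter(noisy)[1][1])  # 2.0
```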

Kirmani developed the computational imager together with his adviser, Vivek Goyal, a research scientist in RLE, and other members of Goyal’s Signal Transformation and Information Representation Group. Researchers in the Optical and Quantum Communications Group and senior research scientist Franco Wong ran the experiments, which contrasted the new system’s performance with that of a conventional lidar system.

“They’ve used a very clever set of information-theoretic techniques to extract a lot of information out of just a few photons, which is really quite incredible, and they’ve been able to do it in the presence of a lot of background noise, which is also impressive,” said John Howell, a professor of physics at the University of Rochester. “Another thing that’s really fascinating is that they’re also getting intensity information out of a single photon, which almost doesn’t make sense.”

Howell believes that the technique could be broadly applicable. “There are many situations in which you are light-starved,” he said. “That could mean that you have a light source that’s weak, or it could be that you’re interrogating a biological sample, and too much light could damage it. Our eyes are a very good example of this, but other biological systems are the same. There could also be remote-sensing applications where you may want to look at something, but you don’t want to give away that you’re illuminating that area.”

The work appears in Science (doi: 10.1126/science.1246775); Kirmani is lead author on the paper.

For more information, visit: mit.edu  

