
Imaging System Navigates Fog Better Than Human Vision

In what could be a boon for autonomous vehicle technology, researchers at MIT have developed a system that can produce images of objects enveloped in fog too thick for human vision to penetrate. Under actual fog conditions, the researchers estimate, the system could provide visibility of 30 to 50 m.

The imaging system uses a time-of-flight camera, which fires ultrashort bursts of laser light into a scene and measures the time it takes their reflections to return.
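The geometry behind this is the basic time-of-flight relation: a photon's round-trip time t maps to distance as d = ct/2. A minimal Python sketch, using a hypothetical round-trip time purely for illustration:

```python
# Basic time-of-flight ranging: a photon's round-trip time t maps to
# object distance via d = c * t / 2 (the factor of 2 accounts for the
# out-and-back path).
C = 3.0e8              # speed of light, m/s
t_round_trip = 2.0e-9  # hypothetical round-trip time: 2 ns
d = C * t_round_trip / 2
print(f"Object distance: {d:.2f} m")  # -> 0.30 m
```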

Because the patterns produced by fog-reflected light vary according to the fog’s density, researchers developed a computational framework to estimate the fog properties and distinguish between background photons reflected from the fog and signal photons reflected from the target.

“We’re dealing with realistic fog, which is dense, dynamic and heterogeneous. It is constantly moving and changing, with patches of denser or less-dense fog. Other methods are not designed to cope with such realistic scenarios,” said researcher Guy Satat.

The camera counts the photons that reach it every 56 ps, and the raw counts are binned into a histogram showing the tally for each interval. The researchers model the fog's contribution with a gamma distribution, whose parameters vary with the fog's thickness.
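A minimal sketch of that step, assuming synthetic, fog-like arrival times and standard NumPy/SciPy fitting routines (the data and parameter values here are illustrative, not the researchers' own):

```python
import numpy as np
from scipy import stats

BIN_WIDTH_S = 56e-12  # the camera's counting interval; since d = c*t/2,
                      # one 56 ps bin spans about 8.4 mm of range

# Hypothetical per-pixel photon arrival times with a fog-like profile.
rng = np.random.default_rng(0)
arrival_times_s = rng.gamma(shape=3.0, scale=0.4e-9, size=5000)

# Bin the photon counts into 56 ps intervals, as the camera does.
bin_edges = np.arange(0.0, arrival_times_s.max() + BIN_WIDTH_S, BIN_WIDTH_S)
counts, edges = np.histogram(arrival_times_s, bins=bin_edges)

# Fit a gamma distribution to the arrival times; its parameters act as
# a statistical model of the fog's back-reflection.
shape, loc, scale = stats.gamma.fit(arrival_times_s, floc=0.0)
print(f"fog model: shape={shape:.2f}, scale={scale:.2e} s")
```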

Guy Satat, a graduate student in the MIT Media Lab, led the study on imaging through fog. Courtesy of Melanie Gonick, MIT.

The system finds the gamma distribution that best fits the shape of the histogram and subtracts the associated photon counts from the measured totals. What remains is a set of small spikes at the distances that correlate with physical obstacles.

The MIT system estimates the values of the gamma distribution variables on the fly and uses the resulting distribution to filter fog reflection out of the light signal that reaches the time-of-flight camera’s sensor.
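Continuing the sketch above, one plausible way to implement the subtraction (again with hypothetical data, and with the simplifying assumption that nearly all measured photons are fog background):

```python
import numpy as np
from scipy import stats

def fog_residual(counts, edges, shape, loc, scale):
    """Subtract the fitted fog model from a measured histogram and
    return the residual plus a candidate target distance."""
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Expected fog counts per bin under the fitted gamma model.
    # Scaling by counts.sum() assumes the background dominates the signal.
    bin_probs = (stats.gamma.cdf(edges[1:], shape, loc=loc, scale=scale)
                 - stats.gamma.cdf(edges[:-1], shape, loc=loc, scale=scale))
    expected = counts.sum() * bin_probs
    residual = counts - expected
    # The strongest positive residual marks a candidate target return.
    peak_time = centers[np.argmax(residual)]
    return residual, 3.0e8 * peak_time / 2  # d = c * t / 2
```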

The system calculates a different gamma distribution for each of the 1024 pixels in the sensor. Each pixel can image a different fog variation, giving the system significant flexibility to handle almost any variation in fog density.
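A sketch of how that per-pixel step might look, assuming for illustration a 32 x 32 sensor layout and the synthetic data generator from above:

```python
import numpy as np
from scipy import stats

# Each pixel gets its own gamma fit, so each can model a different
# local fog density. A 32 x 32 layout (1024 pixels) is assumed here
# purely for illustration.
rng = np.random.default_rng(1)
depth_map = np.zeros((32, 32))
for i in range(32):
    for j in range(32):
        # Hypothetical per-pixel arrival times (fog-like background).
        times = rng.gamma(shape=3.0, scale=0.4e-9, size=2000)
        shape, loc, scale = stats.gamma.fit(times, floc=0.0)
        # ...then subtract the fitted background per pixel, as in
        # fog_residual() above, to recover that pixel's depth.
```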

Researchers tested the system on a range of fog densities created in a fog chamber. They showed that it could resolve images of objects and gauge their depth at a range of 57 cm in fog so dense that human visibility extended only 36 cm.


The MIT system was able to resolve images of objects and gauge their depth at a range of 57 cm. Courtesy of Melanie Gonick, MIT.

Optical depth, a measure of how much light the fog absorbs and scatters along a path, is independent of physical distance, so the system's performance in fog of a given optical depth at a range of 1 m should be a good predictor of its performance in fog with the same optical depth at a range of 30 m. The system could potentially perform even better at longer distances, as the differences between photons' arrival times would be greater.
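In Beer-Lambert terms (a standard attenuation model, used here only to illustrate the scaling argument):

```latex
% Optical depth tau is the extinction coefficient sigma times the
% path length d, and it sets the attenuation regardless of how the
% product is split between the two factors:
\tau = \sigma d, \qquad I = I_0 \, e^{-\tau}.
% Example: dense fog with sigma = 3 m^{-1} over d = 1 m and lighter
% fog with sigma = 0.1 m^{-1} over d = 30 m both give tau = 3, so
% both attenuate the signal by the same factor e^{-3} (about 95%).
```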

“If you look at the computation and the method, it’s surprisingly not complex. We also don’t need any prior knowledge about the fog and its density, which helps it to work in a wide range of fog conditions,” said Satat.

An inability to handle misty driving conditions has been one of the chief obstacles to the development of autonomous vehicular navigation systems that use visible light, which are preferable to radar-based systems for their high resolution and ability to read road signs and track lane markers.

“Bad weather is one of the big remaining hurdles to address for autonomous driving technology,” said professor Srinivasa Narasimhan of Carnegie Mellon University. “Guy [Satat] and professor Ramesh Raskar’s innovative work produces the best visibility enhancement I have seen at visible or NIR wavelengths and has the potential to be implemented on cars very soon.”

The research will be presented at the International Conference on Computational Photography, to be held May 4-6, 2018, at Carnegie Mellon University in Pittsburgh.


MIT researchers developed an imaging system that can gauge the distance of objects shrouded in fog so thick that human vision cannot penetrate it. Such a system could facilitate the development of reliable autonomous vehicular navigation systems. Courtesy of Melanie Gonick, MIT.

Published: March 2018
