Researchers Improve ToF Resolution 1000-Fold Using Computational Approach

CAMBRIDGE, Mass., Dec. 28, 2017 — In a novel approach to time-of-flight (ToF) imaging, researchers applied the principle of heterodyning to ToF, increasing depth resolution 1000-fold. Higher ToF resolution could enable accurate distance measurements through fog, removing a major obstacle to the development of self-driving cars.

At a range of 2 m, existing ToF systems have a depth resolution of about 1 cm. In contrast, a new ToF system developed by a team at MIT has a depth resolution of 3 µm at distances of 2 m.


MIT researchers present a new heterodyne lidar scheme that uses a cascaded stack of telecommunications modulators to enable time-of-flight 3D imaging with micrometer-scale range precision. Courtesy of Achuta Kadambi and Ramesh Raskar, MIT.

With ToF imaging, light-burst length is one factor that determines system resolution. A burst of light is fired into a scene, and a camera measures the time it takes to return, which indicates the distance of the object that is reflecting the light.
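The underlying relationship is simple enough to sketch. A minimal illustration of the round-trip calculation (not the MIT system's actual pipeline):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_s: float) -> float:
    """Distance to the reflector: the light covers 2*d in the measured time."""
    return C * round_trip_s / 2.0

# A target 2 m away returns a light burst after about 13.3 nanoseconds:
round_trip = 2 * 2.0 / C          # ~1.334e-8 s
print(tof_distance(round_trip))   # -> 2.0
```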

Detection rate also determines resolution. According to the researchers, today’s detectors can make only about 100 million measurements per second, which limits existing ToF systems to centimeter-scale resolution.

To improve resolution, the researchers looked at interferometry, in which a light beam is split in two and half of it is kept circulating locally while the other half is fired into a visual scene. The reflected sample beam is recombined with the locally circulated light, and the difference in phase between the two beams yields a precise measurement of the distance that the sample beam has traveled. However, interferometry requires careful synchronization of the two light beams — a difficult achievement for a moving vehicle.
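The phase-to-distance conversion at the heart of interferometry can be sketched as follows; the 1550 nm telecom-band wavelength is an assumed value for illustration:

```python
import math

WAVELENGTH = 1550e-9   # assumed telecom-band wavelength, in meters

def path_difference(phase_rad: float) -> float:
    """Distance change implied by a measured phase difference.

    Light makes a round trip, so each meter of distance adds two meters
    of optical path: delta_d = phase * wavelength / (4 * pi).
    """
    return phase_rad * WAVELENGTH / (4 * math.pi)

# A half-cycle (pi radian) phase shift corresponds to a quarter-wavelength
# change in distance: 1550 nm / 4 = 387.5 nm.
print(path_difference(math.pi))  # -> 3.875e-07
```

Because the wavelength is sub-micrometer, even coarse phase measurements translate to nanometer-scale distance sensitivity, which is also why the technique is so vulnerable to vibration.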

“You could never put interferometry on a car because it’s so sensitive to vibrations,” researcher Achuta Kadambi said. “We’re using some ideas from interferometry and some of the ideas from lidar, and we’re really combining the two here.”

Kadambi said that the MIT system also borrows from principles of acoustics. For example, if two voices are singing at different pitches, the interplay between these tones will produce a third tone, with a frequency that is the difference between the first two.

The same is true with light pulses. If a ToF imaging system is firing light into a scene at the rate of a billion pulses a second, and the returning light is combined with light pulsing 999,999,999 times a second, the result will be a light signal pulsing once a second — a rate easily detectable with a commodity video camera. That slow “beat” will contain all the phase information necessary to gauge distance.

Rather than trying to synchronize two high-frequency light signals, as an interferometry system would do, the team modulated the returning signal using the same technology that produced it in the first place — in other words, they pulsed the already-pulsed light.

“The fusion of the optical coherence and electronic coherence is very unique,” said professor Ramesh Raskar. “We’re modulating the light at a few gigahertz, so it’s like turning a flashlight on and off millions of times per second. But we’re changing that electronically, not optically. The combination of the two is really where you get the power for this system.”

Gigahertz optical systems are better at compensating for fog than lower-frequency systems. Fog is problematic for ToF systems because it deflects the returning light signals so that they arrive late and at odd angles. Trying to isolate a true signal when light is scattered is too computationally challenging to do on the fly, said the researchers.

With low-frequency systems, scattering causes a slight shift in phase. But with high-frequency systems, the phase shift is much larger relative to the frequency of the signal. Scattered light signals arriving over different paths will actually cancel each other out — the troughs of one wave will align with the crests of another. According to the researchers, theoretical analyses performed at the University of Wisconsin and Columbia University suggest that this cancellation could be widespread enough to make identifying a true signal much easier.
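The cancellation argument can be sketched numerically: returns spread over a nanosecond of extra path delay stay nearly in phase at low modulation frequencies, but at gigahertz rates their phasors wrap through many cycles and average toward zero. The 1 ns delay spread below is an assumed value for illustration:

```python
import cmath
import random

random.seed(0)
# Hypothetical scattered returns with path delays spread over ~1 ns
# (roughly 30 cm of extra optical path) -- an assumed spread.
delays = [random.uniform(0.0, 1e-9) for _ in range(10_000)]

def net_scatter(f_hz: float) -> float:
    """Magnitude of the averaged phasor sum of all scattered returns."""
    total = sum(cmath.exp(2j * cmath.pi * f_hz * tau) for tau in delays)
    return abs(total) / len(delays)

# At 1 MHz modulation the phases barely differ, so scattered light adds up
# coherently; at 10 GHz the phases wrap many times and largely cancel.
print(net_scatter(1e6))    # close to 1.0
print(net_scatter(10e9))   # close to 0.0
```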

To test their approach, the team sent a light signal through 500 meters of optical fiber before feeding the signal to the MIT-developed system. The fiber had regularly spaced filters along its length to simulate the power falloff incurred over longer distances.

Results from the team’s experiment showed depth sensing at 3-µm resolution, hertz-level frame rates, and robustness to extreme vibrations. The team believes that these results could be used to develop applications for obtaining high-quality 3D scans in uncontrolled environments, and that the study could provide a foundation for future work in computational imaging, where cascaded modulating elements are incorporated into the correlation ToF architecture.

The research was published in IEEE Access (doi: 10.1109/ACCESS.2017.2775138).

Published: December 2017
