‘Time-Folded Optics’ Can Enhance Ultrafast Camera Capabilities

Photonics.com
Aug 2018
CAMBRIDGE, Mass., Aug. 13, 2018 — Researchers from MIT have developed a way to capture images based on the timing of the light reflected inside the camera optics. The technique, which the researchers call “time-folded optics,” uses mirrors inside the lens system to reflect light signals. A set of semi-reflective parallel mirrors reduces, or “folds,” the focal length every time the light is reflected between the mirrors.

Instead of heading straight to the sensor, the light signal first bounces back and forth between the parallel mirrors. In each of these “round-trip” reflections, some light is captured by a sensor programmed to take an image at a specific time interval — for example, a 1-ns snapshot every 30 ns.
MIT researchers have developed novel photography optics, dubbed “time-folded optics,” that capture images based on the timing of reflecting light inside the lens, instead of the traditional approach that relies on the arrangement of optical components. Courtesy of B. Heshmat, M. Tancik, G. Satat, and R. Raskar.

Each time a light signal is reflected, the sensor captures a separate image. The result is a sequence of images, each corresponding to a different point in time and to a different distance from the lens.

Each round trip of light moves the focal point — that is, the point where the sensor is positioned to capture an image — closer to the lens. This allows the lens tube to be significantly condensed.

“When you have a fast sensor camera, to resolve light passing through optics, you can trade time for space,” said researcher Barmak Heshmat. “That’s the core concept of time folding. You look at the optic at the right time, and that time is equal to looking at it in the right distance. You can then arrange optics in new ways that have capabilities that were not possible before.”

Here is how time-folded optics can match the long focal length of a traditional lens: The first round trip of light pulls the focal point closer to the lens by roughly twice the spacing of the mirror set. Each subsequent round trip brings the focal point closer still. Depending on the number of round trips, the sensor can end up very near the lens.
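The folding geometry described above can be sketched numerically. This is an illustrative model only, assuming each round trip between mirrors spaced a fixed gap apart shortens the required lens-to-sensor distance by twice that gap; the function name and the mirror-gap value below are hypothetical, not figures from the paper.

```python
# Illustrative sketch of the time-folding geometry (assumed model, not
# the paper's exact optics): each round trip between the semi-reflective
# mirrors, spaced `mirror_gap_cm` apart, adds twice that gap of optical
# path, so the sensor can sit that much closer to the lens.

def folded_sensor_distance(image_distance_cm, mirror_gap_cm, round_trips):
    """Lens-to-sensor distance after `round_trips` folds (hypothetical model)."""
    folded = image_distance_cm - 2 * round_trips * mirror_gap_cm
    if folded <= 0:
        raise ValueError("too many round trips for this geometry")
    return folded

# With an assumed ~2.9-cm mirror gap, folding a 32.1-cm image distance
# five times lands the sensor about 3.1 cm from the lens — consistent in
# scale with the distances reported in the article.
print(folded_sensor_distance(32.1, 2.9, 5))
```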

By placing the sensor at the precise focal point determined by the total number of round trips, the camera captures a sharp final image, as well as different stages of the light signal, each coded at a different time, as the signal changes shape to produce the image. The first few shots are blurry, but after several round trips the target object comes into focus, according to the researchers.

Using time-folded optics, the MIT researchers were able to compress the lens tube by an order of magnitude, while still capturing an image of the scene.

To demonstrate their technique, the researchers imaged a femtosecond light pulse through a mask engraved with “MIT” and set 53 cm away from the lens aperture. Whereas a traditional 20-cm focal length lens would have to be placed about 32 cm from the sensor to capture this image, the time-folded optics pulled the image into focus after five round trips, with only a 3.1-cm lens-to-sensor distance.
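The quoted 32-cm figure follows from the standard thin-lens equation, 1/f = 1/d_object + 1/d_image, using the article's numbers:

```python
# Thin-lens check of the article's figures:
# 1/f = 1/d_object + 1/d_image  =>  d_image = 1 / (1/f - 1/d_object)

focal_length_cm = 20.0     # traditional lens focal length from the article
object_distance_cm = 53.0  # mask-to-aperture distance from the article

image_distance_cm = 1.0 / (1.0 / focal_length_cm - 1.0 / object_distance_cm)
print(round(image_distance_cm, 1))  # ~32.1 cm, matching the "about 32 cm"
```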

The researchers next imaged two patterns, each within the camera’s line of sight but spaced about 50 cm apart: an “X” pattern 55 cm from the lens and a “II” pattern 4 cm from the lens.

They rearranged the optics — including placing the lens between the two mirrors — to shape the light in a way that caused each round trip of light to create a new magnification in a single image acquisition. When they shot the laser into the scene, they achieved two separate, focused images created in one shot: the X pattern captured on the first round trip, and the II pattern captured on the second round trip.

The team then demonstrated an ultrafast multispectral camera. They designed two color-reflecting mirrors and a broadband mirror: one mirror, tuned to reflect one color, was set closer to the lens, and the other, tuned to reflect a second color, was set farther back. The researchers imaged a mask with an “A” and a “B,” the “A” illuminated by the second color and the “B” by the first, each for a few tenths of a picosecond.

When the light traveled into the camera, wavelengths of the first color immediately reflected back and forth in the first cavity, and their timing was clocked by the sensor. Wavelengths of the second color, however, passed through the first cavity into the second, slightly delaying their arrival at the sensor.

Because the researchers knew which wavelength would hit the sensor at which time, they could overlay the corresponding color onto each image as it arrived. This capability could be used in depth-sensing cameras, which currently record only infrared light.

Heshmat believes that by tweaking the cavity spacing or by using different types of cavities, sensors, and lenses, the new technique could be applied to many different optics designs.

“The core message is that when you have a camera that is fast, or has a depth sensor, you don’t need to design optics the way you did for old cameras,” he said. “You can do much more with the optics by looking at them at the right time.”

The research was published in Nature Photonics (doi:10.1038/s41566-018-0234-0).



With the rapid development of faster frame rates and more light-sensitive camera sensors, why aren't camera lenses quickly evolving as well? Researchers at the MIT Media Lab's Camera Culture group are addressing that question with a new way to utilize time and space to shrink camera lenses while adding new functions such as selective zoom range and enhanced spectral resolution. Courtesy of the MIT Media Lab.

