
Trillion fps Video: Streak Camera Stops Light

CAMBRIDGE, Mass., Dec. 13, 2011 — A novel streak camera that captures images in picosecond increments now makes it possible to stop not just a horse in mid-canter or a bullet piercing an apple, but light particles themselves as they traverse a scene.

Researchers in the MIT Media Lab created the camera, which can acquire data at a rate of 1 trillion exposures per second. That hyperfast rate produces a slow-motion video of a burst of light traveling the length of a one-liter soda bottle, bouncing off the cap and reflecting back toward the bottle’s bottom. The work follows in the footsteps of 19th-century photographic pioneer Eadweard Muybridge, whose sequential photographs, made for Leland Stanford, first showed the stages of a horse’s gallop, and of MIT’s own Harold Eugene “Doc” Edgerton, whose high-speed strobe photography captured the iconic image of a bullet piercing an apple.

Media Lab postdoc Andreas Velten, one of the system’s developers, calls it the “ultimate” in slow motion: “There’s nothing in the universe that looks fast to this camera,” he said.


A new ultrafast imaging system developed at MIT differs from other high-speed imaging systems in that it can capture light scattering below the surfaces of solid objects, such as the tomato depicted here. (Image: Di Wu and Andreas Velten, MIT Media Lab)

The system relies on streak camera technology, although it is deployed in an unexpected way. The aperture of the streak camera is a narrow slit. Light entering through the slit is converted to electrons at a photocathode, and those electrons pass through a rapidly changing electric field that deflects them in a direction perpendicular to the slit. Because the field changes so quickly, light that arrives late is deflected farther than light that arrives early.

The image produced by the camera therefore is two-dimensional, but only one dimension — the one corresponding to the direction of the slit — is spatial. The other dimension, corresponding to the degree of deflection, is time. The image thus represents the time of arrival of photons passing through a one-dimensional slice of space.
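To make that geometry concrete, the sketch below (illustrative Python, not the MIT group’s code; the array sizes and sweep window are assumed values) histograms photon arrival times into a position-versus-time streak image: one axis follows the slit, the other records how far each photon was deflected, which encodes when it arrived.

```python
import numpy as np

# Minimal sketch: map photon arrival times to deflection on the sensor.
# A linearly ramping field deflects late arrivals farther than early ones,
# so the recorded 2-D image has one spatial axis (slit position) and one
# time axis. All constants here are illustrative assumptions.

N_SLIT_PIXELS = 256       # samples along the slit (spatial axis)
N_TIME_BINS = 512         # deflection bins (time axis)
SWEEP_WINDOW_PS = 1000.0  # duration of the deflection ramp, in picoseconds

def record_streak(slit_positions, arrival_times_ps):
    """Histogram photons into a (slit position, time) streak image."""
    image = np.zeros((N_SLIT_PIXELS, N_TIME_BINS))
    # Deflection grows linearly with arrival time during the sweep window.
    time_bins = np.clip(
        (arrival_times_ps / SWEEP_WINDOW_PS * N_TIME_BINS).astype(int),
        0, N_TIME_BINS - 1)
    np.add.at(image, (slit_positions, time_bins), 1.0)
    return image

# Example: a short pulse arriving around 300 ps, spread across the slit.
rng = np.random.default_rng(0)
xs = rng.integers(0, N_SLIT_PIXELS, size=10_000)
ts = rng.normal(300.0, 5.0, size=10_000)   # arrival times in ps
streak = record_streak(xs, ts)
print(streak.shape)  # (256, 512): rows = slit position, columns = time
```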

The camera was intended for use in experiments where light passes through, or is emitted by, a chemical sample. Since chemists are interested chiefly in the wavelengths of light that a sample absorbs, or in how the intensity of the emitted light changes over time, the fact that the camera registers only one spatial dimension is irrelevant.

But it’s a serious drawback in a video camera. To produce their super-slow-motion videos, Velten and his colleagues — Media Lab associate professor Ramesh Raskar and professor of chemistry Moungi Bawendi — must perform the same experiment repeatedly, continually repositioning the streak camera to gradually build up a two-dimensional image. It takes only a nanosecond for light to traverse the bottle, for example, but it takes about an hour to collect all the data necessary for the final video. For that reason, Raskar calls the new system “the world’s slowest fastest camera.”



Media Lab postdoc Andreas Velten, left, and associate professor Ramesh Raskar with the experimental setup they used to produce slow-motion video of light scattering through a plastic bottle. (Photo: M. Scott Brauer)

After an hour, the researchers accumulate hundreds of thousands of data sets, each of which plots the one-dimensional positions of photons against their times of arrival. Raskar, Velten and other members of Raskar’s Camera Culture group at the Media Lab developed algorithms that can stitch that raw data into a set of sequential two-dimensional images.
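In outline, that stitching step can be pictured as stacking the per-position streak records into a data cube and re-slicing it along the time axis. The Python sketch below is a simplified illustration under that assumption; the function name, array shapes and synthetic data are hypothetical, not the Camera Culture group’s actual pipeline.

```python
import numpy as np

# Sketch of the stitching idea: each repeat of the experiment yields a streak
# image of shape (n_y, n_t) taken at one horizontal camera position x.
# Stacking the scanlines and re-slicing along the time axis turns the pile of
# 1-D-space / 1-D-time records into a sequence of 2-D movie frames.

def stitch_frames(scanlines):
    """scanlines: list of (n_y, n_t) streak images, ordered by camera position x.

    Returns an array of shape (n_t, n_x, n_y): one 2-D frame per time bin.
    """
    cube = np.stack(scanlines, axis=0)        # (n_x, n_y, n_t)
    return np.transpose(cube, (2, 0, 1))      # (n_t, n_x, n_y)

# Example with synthetic data: 480 camera positions, 300 slit pixels, 512 time bins.
rng = np.random.default_rng(1)
scans = [rng.random((300, 512)) for _ in range(480)]
frames = stitch_frames(scans)
print(frames.shape)  # (512, 480, 300): frame index, x, y
```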

Because the ultrafast-imaging system requires multiple passes to produce its videos, it can’t record events that aren’t precisely repeatable. Any practical applications will probably involve cases where the way in which light scatters is itself a source of useful information. Those cases may, however, include analyses of the physical structures of manufactured materials and biological tissues — “like ultrasound with light,” Raskar said.

For an event that is not repeatable, the group reported, the achievable signal-to-noise ratio would make capture nearly impossible. Instead, the researchers exploit the fact that, statistically, photons trace the same paths under repeated pulsed illumination. By carefully synchronizing the pulsed light with the capture of the reflected light, they can record the same pixel at the same relative time slot millions of times and accumulate enough signal. The resulting time resolution is 1.71 ps; because light travels only about 0.5 mm in that time, any feature smaller than roughly 0.5 mm would be difficult to resolve.
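Those numbers are easy to sanity-check: light covers roughly 0.3 mm per picosecond, and shot-noise-limited averaging over N repeated pulses improves signal-to-noise roughly as the square root of N. The snippet below works through that arithmetic; only the 1.71 ps figure comes from the article, the rest is a standard back-of-envelope model.

```python
import math

# Back-of-envelope check of the quoted figures: how far light travels in one
# time bin, and how averaging repeated pulses builds up signal-to-noise.

C_MM_PER_PS = 0.2998          # speed of light, millimetres per picosecond
TIME_RESOLUTION_PS = 1.71     # time resolution reported for the system

distance_per_bin_mm = C_MM_PER_PS * TIME_RESOLUTION_PS
print(f"Light travels ~{distance_per_bin_mm:.2f} mm per time bin")  # ~0.51 mm

# Shot-noise-limited averaging: recording the same pixel/time slot over N
# pulses improves the signal-to-noise ratio roughly as sqrt(N).
for n_pulses in (1, 10_000, 1_000_000):
    print(f"{n_pulses:>9} pulses -> SNR gain ~ {math.sqrt(n_pulses):,.0f}x")
```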

Raskar also sees a potential application in the development of better camera flashes. “An ultimate dream is, How do you create studio-like lighting from a compact flash? How can I take a portable camera that has a tiny flash and create the illusion that I have all these umbrellas and spotlights and so on?” asked Raskar. “With our ultrafast imaging, we can actually analyze how the photons are traveling through the world. And then we can recreate a new photo by creating the illusion that the photons started somewhere else.”

For more information, visit: www.mit.edu



Video: Melanie Gonick

Published: December 2011
