
Streak camera stops light for trillion-fps video

Ashley N. Paddock, [email protected]

A novel streak camera that captures images in picosecond increments now makes it possible to stop not just a bullet piercing an apple or a horse in mid-canter, but light particles themselves as they traverse a scene.

The camera, created in MIT’s Media Lab, can acquire data at a rate of 1 trillion exposures per second. That hyperfast rate produces a slow-motion video of a burst of light traveling the length of a 1-liter soda bottle, bouncing off the cap and reflecting back toward the bottle’s bottom. The work follows in the footsteps of Stanford University’s Eadweard Muybridge, whose 19th-century photographic technique first showed the stages of a horse’s gallop, and of MIT’s own Harold Eugene “Doc” Edgerton, whose 120 strobe flashes per second helped capture the iconic image of a projectile puncturing a whole apple.

“This is the ultimate in slow motion,” said Media Lab postdoc Andreas Velten, one of the system’s developers. “There’s nothing in the universe that looks fast to this camera.”

Media Lab postdoc Andreas Velten, left, and associate professor Ramesh Raskar with the experimental setup they used to produce slow-motion video of light scattering through a plastic bottle. Courtesy of M. Scott Brauer.

The system relies on streak camera technology, although it is deployed in an unexpected way. The aperture of the streak camera is a narrow slit. Photons enter the camera through the slit and pass through an electric field that deflects them in a direction perpendicular to the slit. Because the electric field is changing very rapidly, it deflects late-arriving photons more than it does early-arriving ones.

The image produced by the camera is therefore two-dimensional; however, only one dimension – the one corresponding to the direction of the slit – is spatial. The other dimension, corresponding to the degree of deflection, is time. The image thus represents the time of arrival of photons passing through a one-dimensional slice of space.
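The mapping described above — one spatial axis from the slit, one time axis from the deflection — can be illustrated with a toy model. This is a hypothetical sketch, not the actual camera electronics: it simply bins photons by slit position and arrival time, assuming a deflection that grows linearly over the capture window.

```python
import numpy as np

def streak_image(x_positions, arrival_times, n_space=64, n_time=64, t_window=1e-9):
    """Toy streak camera: photons entering a 1-D slit are binned by
    position along the slit (columns) and by arrival time (rows),
    since the time-varying field deflects late photons further."""
    img = np.zeros((n_time, n_space))
    for x, t in zip(x_positions, arrival_times):
        col = int(x * n_space)               # position along the slit, x in [0, 1)
        row = int((t / t_window) * n_time)   # deflection proportional to arrival time
        if 0 <= col < n_space and 0 <= row < n_time:
            img[row, col] += 1               # one photon count
    return img

# Two photons at the same slit position but different arrival times
# land in different rows of the resulting 2-D streak image.
img = streak_image([0.5, 0.5], [0.1e-9, 0.9e-9])
```

The key point the model captures is that both output axes come from a single 1-D slice of the scene: the second image dimension encodes *when* each photon arrived, not *where*.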

The camera was intended for use in experiments where light passes through, or is emitted by, a chemical sample. Because chemists are interested chiefly in the wavelengths of light that a sample absorbs, or in how the intensity of the emitted light changes over time, the fact that the camera registers only one spatial dimension is irrelevant.

But that’s a serious drawback in a video camera. To produce their super-slow-motion videos, Velten and his colleagues – Media Lab associate professor Ramesh Raskar and professor of chemistry Moungi Bawendi – must perform the same experiment repeatedly, continually repositioning the streak camera to gradually build up a two-dimensional image. It takes only a nanosecond for light to traverse the bottle, for example, but it takes about an hour to collect all the data necessary for the final video. For that reason, Raskar calls the new system “the world’s slowest fastest camera.”

After an hour, the researchers accumulate hundreds of thousands of data sets, each of which plots the one-dimensional positions of photons against their times of arrival. Raskar, Velten and other members of Raskar’s Camera Culture group at the Media Lab developed algorithms that can stitch that raw data into a set of sequential two-dimensional images.
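The stitching step can be sketched in a few lines. This is an assumption about the data layout, not the group’s actual algorithm: each repeated experiment yields one streak image of shape (time bins × slit positions) for a single scanline, and stacking the scanlines produces a sequence of 2-D video frames, one per time bin.

```python
import numpy as np

def stitch_frames(streak_images):
    """Stack per-scanline streak images (each n_time x n_x) along a new
    y-axis, then slice along time to get one 2-D frame per time bin."""
    data = np.stack(streak_images, axis=1)          # shape: (n_time, n_y, n_x)
    return [data[t] for t in range(data.shape[0])]  # list of (n_y, n_x) frames

# 100 scanlines, each a streak image with 480 time bins x 640 slit positions
scanlines = [np.random.rand(480, 640) for _ in range(100)]
frames = stitch_frames(scanlines)
print(len(frames), frames[0].shape)  # 480 frames, each 100 x 640
```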

Because the ultrafast imaging system requires multiple passes to produce its videos, it can’t record events that aren’t precisely repeatable. Any practical applications will probably involve cases where the way in which light scatters is itself a source of useful information. Those cases may, however, include analyses of the physical structures of manufactured materials and biological tissues – “like ultrasound with light,” Raskar said.

An ultrafast imaging system developed at MIT differs from other high-speed imaging systems in that it can capture light scattering below the surfaces of solid objects, such as the tomato depicted here. Courtesy of Di Wu and Andreas Velten, MIT Media Lab

If the event is not repeatable, the group reported, the required signal-to-noise ratio would make it nearly impossible to capture the event. Instead, the researchers exploit the fact that the photons statistically will trace the same path in repeated pulsed illuminations. Careful synchronization of the pulsed light with the capture of reflected light allows them to record the same pixel at the exact same relative time slot millions of times to accumulate sufficient signal. The resulting time resolution is 1.71 ps, so any activity spanning less than 0.5 mm in size would be difficult to record.
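The 0.5 mm figure follows directly from the time resolution: it is roughly the distance light travels in one 1.71 ps time bin.

```python
# Back-of-the-envelope check of the spatial scale implied by the
# camera's 1.71 ps time resolution.
c = 299_792_458            # speed of light, m/s
dt = 1.71e-12              # time resolution, s
d = c * dt                 # distance light travels in one time bin, m
print(f"{d * 1e3:.2f} mm")  # ≈ 0.51 mm
```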

Raskar also sees a potential application in the development of better camera flashes. “An ultimate dream is: How do you create studiolike lighting from a compact flash? How can I take a portable camera that has a tiny flash and create the illusion that I have all these umbrellas and sport lights and so on?” Raskar said. “With our ultrafast imaging, we can actually analyze how the photons are traveling through the world, and then we can recreate a new photo by creating the illusion that the photons started somewhere else.”

Photonics Spectra
Mar 2012
cameras, Americas, Andreas Velten, Basic Science, Consumer, imaging, Massachusetts, MIT Media Lab, Moungi Bawendi, picosecond imaging, Ramesh Raskar, Streak Cameras, Tech Pulse, Research & Technology, ultrafast imaging
