A new ultrafast time-of-flight imaging technique uses reflections from a nonmirrored surface to recover 3-D shapes hidden from sight, essentially allowing a camera to capture images around corners.

Scientists at MIT's Media Lab combined an ultrafast camera with femtosecond laser pulses fired at 60 positions on a screen to produce recognizable 3-D images of a wooden figurine and foam cutouts outside the device's line of sight. The findings could lead to imaging systems that allow emergency responders to evaluate dangerous environments or to vehicle navigation systems that can negotiate blind turns. The instrument also could be used with endoscopic medical devices to produce images of previously obscured regions of the body. The study appeared in Nature Communications (doi: 10.1038/ncomms1747).

[Figure captions — Top: the experimental setup with the hidden object. Courtesy of Christopher Barsi and Andreas Velten, MIT Media Lab. Bottom left: the image captured around the corner, a projection of a 3-D confidence map of the reconstructed volume; the scientists used ultrafast illumination and imaging to analyze scattered background light and computationally reconstructed this image from the data. Courtesy of Velten et al, MIT. Bottom right: a sketch describing the MIT concept. Courtesy of Tiago Allen.]

Femtosecond lasers previously were used to produce extremely high-speed images of biochemical processes in laboratory settings, where the pulses' trajectories were carefully controlled. "Four years ago, when I talked to people in ultrafast optics about using femtosecond lasers for room-size scenes, they said it was totally ridiculous," said Ramesh Raskar, an associate professor at MIT and leader of the new research.

"It has been, and still is, difficult to get imaging information at these speeds," Andreas Velten, a former postdoctoral associate in Raskar's group, told Photonics Spectra.
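A short back-of-the-envelope calculation (illustrative only; the speed of light is a standard constant, not a figure from the article) shows why picosecond timing matters: converting a pulse's flight time to path length reveals that even a 2-ps time bin corresponds to well under a millimeter of light travel.

```python
# Illustrative sketch: time-of-flight ranging converts a pulse's
# travel time into the distance the light has covered.

C = 299_792_458.0  # speed of light in vacuum, m/s (standard constant)

def path_length(travel_time_s: float) -> float:
    """Total distance a pulse covered, given its flight time in seconds."""
    return C * travel_time_s

# A detector that time-stamps arrivals in ~2-ps bins can only resolve
# path-length differences of roughly:
resolution_m = path_length(2e-12)
print(f"{resolution_m * 1000:.2f} mm of light travel per 2-ps bin")  # ~0.60 mm
```

This is why a detector sampling every few picoseconds can distinguish surfaces millimeters apart, and why relaxing the timing resolution directly coarsens the spatial resolution.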
"We expect emerging technologies to make this easier in the near future."

To recover images of the obscured wooden figurine, the scientists fired short bursts of light from a Ti:sapphire laser toward an opaque screen. The light reflected off the panel, bounced around the staged room and re-emerged, striking the camera detector, which took measurements every few picoseconds. Because the light bursts are so short, the system can gauge how far the light has traveled by measuring the time it takes to reach the detector. Bursts were then fired at several more positions on the screen.

The data collected by the ultrafast sensor were processed by algorithms developed by the scientists. The team's image-reconstruction algorithm uses a technique called filtered backprojection, which is also the basis of CAT scans. Although blurry, the resulting 3-D images were easily recognizable.

The reconstruction quality may change, however, if there are multiple objects in the room. In the experiments, Raskar's group found that the problems associated with peering around a corner are similar to those of using multiple antennas to determine the direction of incoming radio signals. The team hopes to use this insight to improve the image quality the system produces and to enable it to handle more cluttered scenes.

"Reconstruction quality does depend on scene complexity to some degree," Velten said. "Whether or not multiple objects can be distinguished depends on the resolution of the system at that given point. The arms and torso of the mannequin in our publication [are] an example of close surfaces that are still separated."

At this time, it is not possible to recover moving objects, but once the system is optimized for speed, reconstructions should take only a few seconds. To go faster than that, resolution would have to be sacrificed, Velten said.
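The ranging-and-backprojection pipeline described above can be sketched in miniature. The toy 2-D example below uses invented geometry and a single hidden point, and casts a plain (unfiltered) backprojection vote rather than the team's full filtered 3-D algorithm: each measured flight time constrains the hidden scatterer to an ellipse of constant path length, and votes from many laser/observation pairs pile up at the true position.

```python
# Toy 2-D sketch of ellipsoidal backprojection (geometry, sizes and
# names invented for illustration; the actual method is a filtered
# backprojection over a 3-D voxel volume).
import numpy as np

C = 3e8      # speed of light, m/s
BIN = 2e-12  # detector time resolution, ~2 ps

# Laser spots and camera observation points lie on the visible wall (y = 0);
# a single hidden scatterer sits somewhere in front of it.
laser_x = np.linspace(-0.5, 0.5, 12)
obs_x = np.linspace(-0.4, 0.4, 12)
hidden = (0.1, 0.7)  # ground truth, to be recovered

def path(ax, ay, bx, by):
    """Euclidean distance between points (ax, ay) and (bx, by)."""
    return np.hypot(ax - bx, ay - by)

# Simulated measurements: total flight path, wall spot -> object -> wall.
meas = [(sx, ox, path(sx, 0.0, *hidden) + path(*hidden, ox, 0.0))
        for sx in laser_x for ox in obs_x]

# Backproject: vote for every grid cell whose path length matches a
# measurement to within half a time bin; votes peak at the true point.
gx, gy = np.meshgrid(np.linspace(-0.5, 0.5, 101), np.linspace(0.2, 1.0, 81))
heat = np.zeros_like(gx)
for sx, ox, d in meas:
    d_grid = path(gx, gy, sx, 0.0) + path(gx, gy, ox, 0.0)
    heat += np.abs(d_grid - d) < C * BIN / 2

i, j = np.unravel_index(np.argmax(heat), heat.shape)
print(f"estimated hidden point: ({gx[i, j]:.2f}, {gy[i, j]:.2f})")
```

The `heat` array here plays the role of the 3-D confidence map shown in the figure: bright cells are positions consistent with many measurements. The filtering step in the team's algorithm (omitted here) sharpens this map before extracting surfaces.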
"Collision avoidance would require extremely low resolution, since we only need to know if there is something around the corner and not what," he added.

The present setup cannot be moved outside the lab; however, it would be easy to build a more compact, power-efficient version that could be transported and operated in the field, without chilled water or high-voltage outlets, Velten said.

Just how far away can the setup be from the object to be imaged? "This is an interesting topic for further research," he said. "The possible dimensions depend on the desired reconstruction resolution, the size of the available wall, the distance between the wall and the object, the distance between the wall and the camera and laser, the scene complexity, the laser power and the signal-to-noise ratio of the detector."

Next, the team plans to improve the setup and algorithm and to develop new hardware to test the method across a variety of application scenarios. In a related side project, which appeared on p. 22 of the March 2012 issue of Photonics Spectra, the scientists captured movies of light in motion at 2-ps time resolution. "Modifying our detection setup allows us to record virtual trillion-frames-per-second movies of light interacting with tabletop scenes," Velten said.