STANFORD, Calif., Nov. 8 -- Computer scientists are bringing photographic technology into sharper focus.
Ren Ng, a Stanford University computer science graduate student in the lab of Pat Hanrahan, the Canon USA Professor in the School of Engineering, has developed a "light field camera" capable of producing photographs in which subjects at every depth appear finely focused. Adapted from a conventional camera, the light field camera overcomes the low-light and high-speed conditions that often plague photography, and foreshadows potential improvements to scientific microscopy, security surveillance, and sports and commercial photography.
"Currently, cameras have to make decisions about the focus before taking the exposure, which engineering-wise can be very difficult," said Ng. "With the light field camera, you can take one exposure, capture a lot more information about the light and make focusing decisions after you've already taken the shot. It is more flexible."
The light field camera, sometimes referred to as a "plenoptic camera," looks and operates exactly like an ordinary handheld digital camera. The difference lies inside. In a conventional camera, rays of light are corralled through the camera's main lens and converge on the film or digital photosensor directly behind it. Each point on the resulting 2-D photo is the sum of all the light rays striking that location.

[Photo caption: Rodin's "Burghers of Calais," at the Stanford Quad. This image and other examples at http://graphics.stanford.edu/papers/lfcamera/refocus/ are linked to short video clips that illustrate refocusing of each scene at different depths. Each clip was processed from a single shot of the Stanford team's light field camera prototype. Photo courtesy Ren Ng. Copyright © Ren Ng, 2004–2005]
The light field camera adds an additional element -- a microlens array -- inserted between the main lens and the photosensor. Resembling the multifaceted compound eye of an insect, the microlens array is a square panel composed of nearly 90,000 miniature lenses. Each lenslet separates back out the converged light rays received from the main lens before they hit the photosensor and changes the way the light information is digitally recorded. Custom processing software manipulates this "expanded light field" and traces where each ray would have landed if the camera had been focused at many different depths. The final output is a synthetic image in which the subjects have been digitally refocused.
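The article does not reproduce the team's processing software, but the ray-tracing step it describes is closely related to the classic "shift-and-add" method of synthetic refocusing. The sketch below, a simplified illustration rather than the Stanford implementation, assumes the recorded light field has already been unpacked into a 4-D array `light_field[u, v, s, t]`, where (u, v) index the viewpoint across the main lens aperture and (s, t) index spatial position; the `alpha` parameter picks the virtual focal plane:

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetically refocus a 4-D light field by shift-and-add.

    light_field: array of shape (U, V, S, T). (U, V) index the
    sub-aperture views separated out by the microlens array;
    (S, T) are spatial pixels. alpha controls the virtual focal
    depth (alpha = 1 reproduces the originally focused plane).
    """
    U, V, S, T = light_field.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture view in proportion to its
            # offset from the aperture centre; the offset scale
            # depends on the chosen virtual focal depth.
            du = (u - (U - 1) / 2) * (1 - 1 / alpha)
            dv = (v - (V - 1) / 2) * (1 - 1 / alpha)
            shifted = np.roll(light_field[u, v],
                              (int(round(du)), int(round(dv))),
                              axis=(0, 1))
            out += shifted
    # Averaging the aligned views simulates a full-aperture
    # exposure focused at the chosen depth.
    return out / (U * V)
```

Rays from subjects at the chosen depth line up after the shift and reinforce each other, while rays from other depths are averaged into a soft blur, which is exactly the "focusing after the shot" behavior Ng describes.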
Expanding the light field demands that the rules of traditional photography be tweaked. Ordinarily, a tradeoff exists between aperture size, which determines the amount of light reaching the film or photosensor, and depth of field, which determines which objects in an image will be sharp and which will be fuzzy. As the aperture size increases, more light passes through the lens and the depth of field shallows -- bringing into focus only the nearest objects and severely blurring the surrounding subjects.
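The tradeoff above can be made concrete with the standard thin-lens approximation for total depth of field, roughly 2·u²·N·c / f² when the subject distance u is well inside the hyperfocal distance (N is the f-number, c the acceptable circle of confusion, f the focal length). The numbers below are illustrative, not from the article:

```python
def depth_of_field(f_mm, N, u_mm, c_mm=0.03):
    """Approximate total depth of field in mm for a thin lens.

    f_mm: focal length, N: f-number (focal length divided by
    aperture diameter), u_mm: subject distance, c_mm: circle of
    confusion (0.03 mm is a common full-frame figure). Valid
    when u_mm is much smaller than the hyperfocal distance.
    """
    return 2 * u_mm**2 * N * c_mm / f_mm**2

# Halving the f-number (doubling the aperture diameter, i.e.
# admitting more light) halves the depth of field:
for N in (8, 4, 2):
    print(f"f/{N}: {depth_of_field(50, N, 3000):.0f} mm of depth of field")
```

With a 50 mm lens focused at 3 m, opening up from f/8 to f/2 cuts the in-focus zone to a quarter of its size, which is the shallowing the paragraph describes and the tradeoff the light field camera is designed to escape.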
The light field camera decouples aperture size and depth of field. The microlens array harnesses the additional light to reveal the depth of each object in the image and project tiny, sharp subimages onto the photosensor. The blurry halo typically surrounding the centrally focused subject is "un-blurred." In this way, the benefits of large apertures -- increased light, shorter exposure time, reduced graininess -- can be exploited without sacrificing the depth of field or sharpness of the image.
Extending the depth of field while maintaining a wide aperture may provide significant benefits to several industries, such as security surveillance. Often mounted in crowded or dimly lit areas, such as congested airport security lines and backdoor exits, monitoring cameras notoriously produce grainy, indiscernible images.
"Let's say it's nighttime and the security camera is trying to focus on something," said Ng. "If someone comes and they are moving around, the camera will have trouble tracking them. Or if there are two people, whom does it choose to track? The typical camera will close down its aperture to try capturing a sharp image of both people, but the small aperture will produce video that is dark and grainy."
The idea behind the light field camera is not new. With the roots of its conception dating back nearly a century, several variants of the light field camera have been devised over the years, each with slight variations in its optical system. Other models that rely on refocusing light fields have been slow and bulky and have generated gaps in the light fields, known as aliasing. Ng's camera -- compact and portable with drastically reduced aliasing -- displays greater commercial utility.
Marc Levoy, professor of computer science and electrical engineering; Mark Horowitz, the Yahoo! Founders Professor in the School of Engineering; Mathieu Bredif, MS '05 in computer science; and Gene Duval, BS '75, MS '78 in mechanical engineering and founder of Duval Design, also contributed to this work.
The research was supported by the Office of Technology Licensing Birdseed Fund, which provides small grants for the prototype development of unlicensed technologies. A manuscript detailing the theoretical performance of the light field camera appeared in Transactions on Graphics, published by the Association for Computing Machinery in July, and subsequently was presented at the 2005 ACM SIGGRAPH (Special Interest Group on Computer Graphics and Interactive Techniques) conference in August in Los Angeles.
For more information, visit: http://graphics.stanford.edu/papers/lfcamera/