
Camera’s Exposures Always Correct

Dan Drollette

The human eyeball can handle an enormous range of brightness levels. It can look at a darkly tanned person sitting on the brilliant white sand of a beach in glaring midday sunlight and correctly distinguish that person's features, all while taking in the entire scene. But cameras, especially digital ones, cannot, unless lights, filters, reflectors and other equipment are used to compensate.


In the glare of daylight, photographers must choose between overexposure (top) or underexposure (middle) of this clay oven. The system under development marries the best renditions from each (bottom), producing an image that retains the subtle shadings and surface details, down to the logs inside the oven.

Because a camera by itself can handle only a limited range, some part of the scene must be overexposed ("washed out," in photographers' parlance) or underexposed. Either our person on the beach will come out as a human floating in white space or the picture will show perfectly rendered sand with a dark, human-shaped blob in the middle.

But computer scientist Shree K. Nayar of Columbia University's Computer Vision Laboratory thinks he has a solution. Essentially, he has come up with an approach that simultaneously takes four different digital exposures of the same scene and combines them into one properly exposed version.


The key lies in the millions of tiny, light-sensitive picture elements, or pixels, that make up every picture. Usually, all pixels respond equally to incoming photons. But by placing, for instance, a mask with cells of different optical transparencies over the detector array, some pixels are made more sensitive to the dark end of the scale while others give better renditions at the light end. The pattern can be arranged so that any group of four neighboring pixels contains a complete range of responses to light.
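
As a rough sketch of the idea, not Nayar's actual design (the 2 x 2 pattern and the transparency values below are invented for illustration), such a mask can be modeled in Python as a small tile of neutral-density factors repeated across the sensor, so that every group of four neighboring pixels spans all four sensitivities:

```python
import numpy as np

# Hypothetical 2x2 pattern of mask transparencies; tiled over the sensor,
# every 2x2 neighborhood then holds one pixel at each of four sensitivities.
EXPOSURES = np.array([[1.0,  0.5],
                      [0.25, 0.125]])

def exposure_mask(height, width):
    """Tile the 2x2 transparency pattern across a height-by-width sensor."""
    reps = (height // 2 + 1, width // 2 + 1)
    return np.tile(EXPOSURES, reps)[:height, :width]

def capture(scene_radiance, full_well=1.0):
    """Simulate one shot through the mask: each pixel records the scene
    radiance scaled by its transparency, clipped at saturation."""
    mask = exposure_mask(*scene_radiance.shape)
    return np.clip(scene_radiance * mask, 0.0, full_well), mask
```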

With this organization, at least one pixel is always able to capture a photon. "If something is completely saturated, there's a good chance that the pixel next to it is not," said Nayar. A set of computer algorithms compares this information with the responses of the surrounding pixels and compensates to produce the best available hard copy. In a sense, the system works like a digital version of photographer Ansel Adams' famed zone system, shifting the limited range of responses to produce a print that most closely represents what the eye sees.
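
A minimal sketch of that recovery step, continuing the invented mask above (the saturation and noise-floor thresholds here are illustrative assumptions, not values from Nayar's system): for each group of four pixels, discard samples that are saturated or lost in noise, undo each survivor's exposure, and average the resulting radiance estimates.

```python
def reconstruct(image, mask, full_well=1.0, noise_floor=0.02):
    """Estimate scene radiance from one masked exposure.

    For every 2x2 block, keep only pixels that are neither saturated nor
    buried in noise, divide each by its mask transparency to recover
    radiance, and average the valid estimates.
    """
    h, w = image.shape
    radiance = np.zeros((h // 2, w // 2))
    for i in range(0, h - h % 2, 2):
        for j in range(0, w - w % 2, 2):
            samples = image[i:i + 2, j:j + 2].ravel()
            gains = mask[i:i + 2, j:j + 2].ravel()
            valid = (samples < full_well) & (samples > noise_floor)
            if valid.any():
                radiance[i // 2, j // 2] = (samples[valid] / gains[valid]).mean()
            else:
                # Every sample clipped or dark: take the least-bad estimate.
                radiance[i // 2, j // 2] = (samples / gains).max()
    return radiance

# Example: a synthetic scene whose dynamic range exceeds any one exposure.
scene = np.random.rand(64, 64) * 8.0
shot, mask = capture(scene)
estimate = reconstruct(shot, mask)  # half-resolution radiance map
```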

Nayar said that the prototype system works in color as well as black and white, and applies to all imagery, whether it is captured by film, digital still, video, x-ray, infrared, magnetic resonance or synthetic aperture radar systems. He is still determining the best hardware for building the pixel arrangement and is refining the algorithms.

He expects it to be on the market in one to two years.

Published: December 2000
Research & Technology | Sensors & Detectors | Tech Pulse
