New Chip: Better Digital Pics For Less Power

Photonics.com
Dec 2005
ROCHESTER, N.Y., Dec. 21 -- A pair of newly patented digital imaging and compression technologies developed at the University of Rochester could soon let imaging chips capture better pictures on a fraction of the power they use today -- all while allowing cameras to shrink to the size of a shirt button and run for years on a single battery.

The researchers say that, placed in a home, the chips could wirelessly provide images to a security company when an alarm is tripped, or even allow mapping software like Google's to zoom in to real-time images at street level. The enormous reduction in power consumption and increase in computing power could also bring video cell phone calls closer to fruition.


Mark Bocko (left) and Zeljko Ignjatovic of the University of Rochester Department of Electrical and Computer Engineering show off their prototype chip that digitizes an image at each pixel. (University of Rochester photo)
The team of Mark Bocko, professor of electrical and computer engineering, and Zeljko Ignjatovic, assistant professor of electrical and computer engineering, has designed a prototype chip that can digitize an image at each pixel, and they are working now to incorporate a second technology that will compress the image with far fewer computations than the best compression techniques available today.

"These two technologies may work together or separately to greatly reduce the energy cost of capturing a digital image," says Bocko. "One is evolutionary in that it pushes current technology further. The second may prove to be revolutionary because it's an entirely new way of thinking about capturing an image in the first place."

The first technology being developed integrates an oversampling "sigma-delta" analog-to-digital converter at each pixel location in a CMOS sensor. Previous attempts at on-pixel conversion have required far too many transistors, leaving too little area to collect light. The new designs use as few as three transistors per pixel, reserving nearly half of the pixel area for light collection.

First tests on the chip show that, at a video rate of 30 frames per second, it uses just 0.88 nanowatts per pixel -- 50 times less than the industry's previous best. It also bests conventional chips in dynamic range, the ratio between the dimmest and brightest light a sensor can record. Existing CMOS sensors can record light 1000 times brighter than their dimmest detectable light, a dynamic range of 1:1000, while the Rochester technology already demonstrates a dynamic range of 1:100,000.
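Those ratios translate directly into decibels, the usual engineering unit for sensor dynamic range. A quick check (an illustrative calculation, not from the article) shows the jump from 1:1000 to 1:100,000 is a 40 dB improvement:

```python
import math

def dynamic_range_db(ratio):
    """Dynamic range in decibels: 20 * log10(brightest / dimmest)."""
    return 20 * math.log10(ratio)

conventional = dynamic_range_db(1_000)    # existing CMOS sensors: 60 dB
rochester = dynamic_range_db(100_000)     # reported prototype: 100 dB
```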

Traditional image sensors use an array of light-sensitive diodes to detect incoming light; transistors at each photodiode amplify the signal and transmit it to an analog-to-digital converter located outside the photodiode array. Other designs can convert the signal to digital at the pixel site, but they require high-precision transistors that take up considerable chip space at each pixel and reduce the surface area devoted to receiving light. The new design uses smaller transistors at each pixel, allowing more light to be detected, and those transistors can be scaled down without diminishing sensor performance as semiconductor fabrication advances shrink transistor sizes. Much denser, higher-resolution chips can therefore be developed without the problems that limit existing sensor designs. Smaller transistors are also faster, letting incoming light be sampled more frequently and accurately.

Bocko and Ignjatovic say that what makes their method work so well is its feedback design. Traditional CMOS image detectors apply a voltage to charge up a photodiode, and incoming light triggers a release of some of that charge. An amplifying transistor then checks the remaining voltage on the diode, and the diode is recharged again. Bocko and Ignjatovic's design also begins with a charged photodiode that discharges when light reaches it, but the discharge is then measured against a one/zero threshold and the resulting bit is delivered off the chip. If the result of a measurement is a one, then a packet of charge is fed back to the diode, effectively recharging it. The design also uses significantly less power than existing sensor designs, which is especially important in smaller devices like cell phones and digital cameras where battery size is restricted.
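The feedback loop described above behaves like a first-order sigma-delta modulator: charge leaks off in proportion to the light, a one/zero threshold decision is made, and each "one" triggers a feedback packet that recharges the diode. The toy model below is an illustrative sketch of that principle (the names, units, and normalization are invented, not the patented circuit); averaging the one-bit stream recovers the light level:

```python
def sigma_delta_pixel(light_level, n_samples=1000):
    """First-order sigma-delta model of the per-pixel converter.

    light_level: normalized photocurrent in [0, 1] (hypothetical units).
    Returns the one-bit output stream; its mean approximates light_level.
    """
    integrator = 0.0
    bits = []
    for _ in range(n_samples):
        integrator += light_level              # photodiode discharges in proportion to light
        bit = 1 if integrator >= 1.0 else 0    # one/zero threshold decision
        bits.append(bit)
        if bit:
            integrator -= 1.0                  # feedback charge packet "recharges" the diode
    return bits

bits = sigma_delta_pixel(0.3)
```

The density of ones in the output stream tracks the brightness at the pixel, which is why oversampling the simple one-bit decision can replace a bulky high-precision converter.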

The second advance -- focal plane image compression -- has taken many researchers by surprise, according to Bocko and Ignjatovic. The two men have devised a way to arrange photodiodes on an imaging chip so that compressing the resulting image demands as little as 1 percent of the computing power usually needed.

Normally, the light-detecting diodes on a chip are arranged in a regular grid -- for example, 1000 x 1000 pixels. A picture is snapped and each diode records the light hitting it. A computer in the camera then runs complex computations to compress the image so that instead of taking up 10 MB, it might take up only 100 kB; the common JPEG format, used on the web and in many cameras and phones, is one example of such compression. This compression takes a tremendous amount of computing power, and hence battery power.

Ignjatovic and Bocko came up with a way to make the physical layout of the light-sensitive diodes simplify the computation. Compression normally includes a computation called the discrete cosine transform, which measures how much a segment of an image resembles a series of cosine waves. Both the image and the cosine waves are sampled at regular intervals, and the transform requires multiplying the two sets of samples together and summing the products. Since the cosine samples can take any value between -1 and +1, the computation requires multiplication by non-integers, which demands the bulk of the computing power.
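For concreteness, here is a naive 1-D DCT-II of the kind JPEG-style compression builds on (an illustrative sketch; production codecs use fast factorizations). Each of the N output coefficients costs N multiplications by non-integer cosine samples -- the expense the Rochester layout is designed to avoid:

```python
import math

def dct_1d(samples):
    """Naive 1-D DCT-II over a block of N samples.

    Each coefficient is a multiply-accumulate of the samples against
    cosine values in [-1, 1], so an N-point block costs N*N multiplies.
    """
    N = len(samples)
    coeffs = []
    for k in range(N):
        c = sum(x * math.cos(math.pi * (n + 0.5) * k / N)
                for n, x in enumerate(samples))
        coeffs.append(c)
    return coeffs
```

A flat (constant-brightness) block concentrates all its energy in the first coefficient, which is what lets compressors discard most of the rest.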

But Ignjatovic and Bocko have laid out the pixels to lie at the peaks of cosine waves, producing a non-uniformly spaced array instead of an evenly spaced one. This cuts the computation required to compress the image nearly fivefold. Because each pixel sits exactly where a cosine wave peaks -- where the cosine value is one -- no multiplication is necessary. With no multiplication and only a little addition, the processor uses far less power.
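A hedged sketch of the idea: if a pixel is physically placed where the basis cosine equals +1 or -1, the multiply-accumulate collapses to signed addition. The functions below are illustrative only (they are not the patented layout) and contrast the two cases for a single coefficient:

```python
import math

def coefficient_uniform(samples, k):
    """One DCT-style coefficient from a uniform pixel grid:
    every term needs a non-integer multiply by a cosine sample."""
    N = len(samples)
    return sum(x * math.cos(math.pi * (n + 0.5) * k / N)
               for n, x in enumerate(samples))

def coefficient_peak_sampled(samples_at_peaks):
    """The same kind of coefficient when pixels sit at the cosine's
    peaks, where its value alternates between +1 and -1: the
    multiply-accumulate collapses to signed addition only."""
    total = 0.0
    for i, x in enumerate(samples_at_peaks):
        total += x if i % 2 == 0 else -x   # cosine is +1/-1 at successive peaks
    return total
```

Eliminating the non-integer multiplies, rather than speeding them up, is what makes the claimed power savings possible.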

The engineers are now looking to build a prototype chip that incorporates both technologies into a single unit to see how much real-world processing power the designs will save. They plan to integrate the technology into wireless security cameras first.

"Wireless security cameras offer the perfect proving ground for these technologies," says Bocko. "These cameras need to capture, compress and transmit high quality images as quickly as they can without consuming precious battery power. As we develop the chips further, we'll look more into consumer cameras and cell phones to see how much battery and processing power we can save them as well."

For more information, visit: www.rochester.edu


