Picturing the Perfect CMOS?
When every photon counts, researchers count on CCD sensors. But CMOS sensors — and their backers — can't be entirely discounted when it comes to high-end applications.
With larger pixel sizes, improved noise reduction, bigger arrays and other techniques, vendors are producing CMOS sensors that rival CCDs. They offer lower power consumption, greater readout speed, smaller system size and other desirable characteristics. A look at the technology reveals when — and how — CMOS sensors will be used in research-grade cameras and other demanding applications.
In the great image sensor war, there have been some surprising twists. Sometimes these two competing technologies have swapped places. For instance, CCD imagers are appearing in cell phone cameras — a high-volume, low-cost application that is supposedly ideal for CMOS. On the other hand, CMOS sensors have recently debuted in professional- and research-grade cameras — applications that had been the domain of CCD-based solutions.
Such innovations as larger pixel sizes, lower dark current, better signal-to-noise ratio and bigger arrays are improving the image quality and capabilities of CMOS sensors. The performance gap between CMOS and CCD is closing, and that should continue for one important reason. “There is a lot more money going into R&D on CMOS sensors,” said Brian O’Rourke, a senior analyst with the Scottsdale, Ariz.-based technology market research firm In-Stat/MDR. He said he had no hard statistics to back up this assertion, but based his claim instead on his observations of the industry and on the number of companies actively engaged in CMOS sensor development.
Some of the firms involved include Tokyo’s Canon Inc., Dialog Semiconductor plc of Kirchheim, Germany, Eastman Kodak Co. of Rochester, N.Y., FillFactory NV of Mechelen, Belgium, Micron Technology Inc. of Boise, Idaho, and Geneva-based STMicroelectronics. There’s even a foundry — Tower Semiconductor Ltd. of Migdal Haemek, Israel — making imaging chips that others design.
Tower Semiconductor’s technology allows sensor segments to be stitched together to create very large CMOS imagers. This picture of a stitched sensor was captured using another of the same type of sensor.
Several of these companies have subsidiaries or divisions in the US: Canon in Lake Success, N.Y., STMicroelectronics in Carrollton, Texas, and Tower in the heart of Silicon Valley, San Jose, Calif. Most make only CMOS imagers, but Kodak is active in both CCD and CMOS arenas.
Kodak manufactures CCD imagers at a facility in Rochester. Its CMOS products are designed in-house and fabricated by others using a custom Kodak processing recipe. Helen Titus, marketing manager for Kodak’s Image Sensor Solutions Group, said that today she would recommend CCD for precision applications. An example might be one where a decision must be made based on analysis of an image captured by a microscope camera. Such issues as dark current, fixed pattern noise and low light levels all would work against CMOS as the best choice.
However, with ongoing research and development, the situation is not static, and the relative youth of CMOS bodes well for it: as the less mature of the two technologies, it has more room to improve. Like others, Titus predicts that the performance gap will continue to shrink as improvements continue to be made.
“I think there is still room for CCD improvements as well, but I think those improvements can be faster with CMOS, just because of the maturity of where we are,” she said.
FillFactory developed this sensor for Eastman Kodak’s CL14.
Even with current technology, CMOS sensors are being used in demanding applications. One example is a new camera built around a custom-designed FillFactory CMOS sensor: the MiCAMultima, marketed by SciMedia Ltd. of Tokyo. The sensor measures 10 mm on a side, has 100 × 100-μm pixels and scans at 10,000 fps. The camera is intended for neurological imaging and was developed by Michinori Ichikawa, head of the brain-operative device laboratory of the Riken Brain Science Institute in Saitama, Japan.
These images of a slice of a cortex of a rat’s brain were taken by Japanese researchers at the Riken Brain Institute using a CMOS sensor (top) from FillFactory. The sensor enabled them to capture a typical propagating pattern along the entorhinal cortex (top panel) and the hippocampus (lower panel). The researchers use voltage-sensitive dye to visualize invisible electrical potential.
Riken is one of the largest academic natural science organizations in Japan, with offices and operations scattered throughout the country. The institute has a staff of nearly 500 and a yearly budget of more than ¥10 billion ($83 million). Its device lab has two main missions: to develop a computer that works like the brain and to develop equipment for measuring brain activities. It is in response to this that Ichikawa designed and developed the camera, which is being produced by his company, Brainvision Inc. of Tokyo.
Ichikawa wrote that the camera “is designed for optical imaging with fluorescent dyes, such as membrane voltage-sensitive dye, calcium ion ratio indicators and so on, for in vivo, tissue-slice and single-cell-level samples in the neuroscience and brain-science field.”
In the past, Ichikawa had used a CCD-based system for the optical detection of electrical changes in brain cells. This presented problems because the imaging system had to be both fast — in the submillisecond range — and able to record small changes of 0.01 to 1 percent of the total signal. That combination is hard to achieve with a CCD-based approach.
So he turned to CMOS, whose biggest advantage, he noted, is the high saturation level made possible by the large pixel size and the readout speed. The latter is a consequence of the architecture used in CMOS sensors. Unlike their CCD counterparts, CMOS sensors can actively select and fetch data from a particular pixel or set of pixels.
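A back-of-envelope calculation shows why saturation level matters for detecting such small changes: photon shot noise on N collected electrons is √N, so the smallest fractional change resolvable in a single frame is roughly √N/N. The function below is an illustrative sketch of that relationship, not part of any vendor's toolkit:

```python
def min_electrons_for_fractional_change(delta_frac, snr=1.0):
    """Photon shot noise on N electrons is sqrt(N), so a fractional
    change delta_frac is resolvable at the given SNR only when
    snr * sqrt(N) / N <= delta_frac.  Solve for the minimum N."""
    return (snr / delta_frac) ** 2

# Resolving a 0.01 percent (1e-4) change at unity SNR takes on the
# order of 1e8 electrons, far beyond the full well of a small pixel:
print(min_electrons_for_fractional_change(1e-4))
```

A 100-μm pixel's much larger full-well capacity (plus averaging over frames) is what brings such tiny fractional signals within reach.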
The output of the CMOS active pixel sensor is amplified and sent to parallel analog-to-digital converter (ADC) modules. Each module has a 14-bit, 10-MHz converter, a field-programmable gate array (FPGA) and 32-MB memory (MEM) for temporarily storing images. A fast interface handles data transfer to a PC. Courtesy of Riken Brain Institute.
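Those published specifications imply a substantial data path. A quick sanity check on the numbers (the ten-module count is an inference from the stated rates, not a figure given in the article):

```python
pixels = 100 * 100        # 10-mm sensor divided into 100-um pixels
fps = 10_000              # frames per second
bits_per_sample = 14      # ADC resolution
adc_rate_hz = 10_000_000  # one 10-MHz converter per module

samples_per_s = pixels * fps                            # raw pixel rate
data_rate_mbit = samples_per_s * bits_per_sample / 1e6  # megabits per second
modules_needed = samples_per_s // adc_rate_hz           # if ADCs run flat out

print(samples_per_s, data_rate_mbit, modules_needed)
```

At 1.4 Gbit/s sustained, it is easy to see why each module carries its own 32-MB buffer and why a fast PC interface is required.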
According to Ichikawa, he will soon publish results obtained with the new camera. “We succeeded in recording single-cell neuron activity under high-magnification microscopy. It might be the first recording of visualized real signal propagation in single neurons,” he said.
As for the future, he is continuing his research and looking for more applications of CMOS imaging technology. He also is looking into the development of a low-cost, high-performance, high-speed camera. This will be made easier by the deployment of high-speed interfaces that can handle high data rates, the implementation of shuttering schemes to avoid blurring, and further enhancements in the CMOS sensors themselves.
Engineers at Weinberger Vision Technology in Grand Blanc, Mich., recorded this high-speed sequence at 1000 fps using a Speedcam Visario CMOS-based camera and a 15-μs electronic shutter rate.
“CMOS image sensor technology will continuously be improved. One of the key issues is further reduction of the dark current,” said Lou Hermans, vice president of sales and marketing at FillFactory. “The challenge is to optimize design and processes in order to achieve good image sensor performance.”
Then again, current CMOS technology is good enough for the latest professional cameras. One of these is from Kodak, the DCS Pro 14n, which uses a customized 13.85-megapixel CMOS image sensor from FillFactory. The sensor has an 8-μm pixel size and a total light-sensitive area of 36 × 24 mm. According to Hermans, the main sensor innovation was the use of FillFactory’s patented N-well pixel high-fill-factor technology, which expands the photoelectron collecting layer below the sensor’s active circuitry. This turns most of the silicon on the chip into a light-sensitive area and does so using standard CMOS processing. Consequently, the sensor has higher sensitivity, which permits shorter exposure times and smaller pixel sizes.
Kodak isn’t the only company making professional cameras based on CMOS sensors. Canon recently introduced the EOS 10D, which uses a 15.1 × 22.7-mm, 6.3-megapixel CMOS sensor designed, developed and manufactured entirely within the company. Chuck Westfall is the director of the technical information department of the camera division for Canon USA. He said that one reason why CMOS is showing up in single-lens reflex cameras, which are characterized by interchangeable lenses and the need to match 35-mm-camera formats, has to do with pixel size.
Canon’s single-lens reflex cameras, such as the EOS-1Ds, depend on CMOS sensors that have performance comparable to that of CCD imagers.
“The size of the individual pixel on the sensor has a great deal to do with the quality of the signal that you are able to extract out of it,” he said.
Consumer-grade cameras, Westfall noted, typically have small pixels — averaging roughly 3 μm per side — and small sensors. Digital single-lens reflex cameras, on the other hand, are designed to match the existing 35-mm format. Consequently, single-lens reflex sensors usually have pixels from 7.5 to 10.5 μm on a side. The increase in pixel surface area boosts the signal per pixel in these cameras by roughly a factor of 10 compared with their consumer counterparts. In addition, single-lens reflex sensor areas are bigger than those of their consumer cousins. Together, the bigger sensors and pixels mean that the image quality is better. This is particularly true for high ISO settings, which might be used in lower-light or fast-action circumstances. This confluence of bigger sensors, larger pixels and improved image quality works to the advantage of CMOS.
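The factor of 10 follows directly from the geometry, since collected signal scales with a pixel's light-gathering area. A quick check (an illustrative script, not vendor data):

```python
consumer_pixel_um = 3.0          # typical consumer-camera pixel pitch
slr_pixel_sizes_um = (7.5, 10.5)

for p in slr_pixel_sizes_um:
    # Signal per pixel scales with collecting area, i.e. pitch squared.
    ratio = (p / consumer_pixel_um) ** 2
    print(f"{p} um pixel collects ~{ratio:.2f}x the light of a 3 um pixel")
```

The ratios run from about 6× at 7.5 μm to about 12× at 10.5 μm, bracketing the quoted 10-to-1 figure.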
The use of CMOS sensors in these cameras is one reason why the prices have fallen from $16,000 in 1995 to just under $1500 today. At the same time, the number of pixels in the sensor has climbed from 1.3 million to 6.3 million.
Over the past few years, Canon has been making improvements aimed at increasing image quality. These include on-chip noise-elimination, complete pixel-charge-transfer and on-chip analog processing technologies. The first is, essentially, a device profile that works on a pixel-by-pixel basis. This provides a prediction of what the noise will be for a given exposure. Circuitry then combs that noise out, boosting the sensor signal-to-noise ratio. The second improvement reduces the occurrence of random noise by forcing identical initial values for each pixel every time it is read. The last innovation incorporates a gain amplifier on the chip, which reduces noise and allows fast signal reading.
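The first of these techniques amounts to per-pixel offset calibration: measure each pixel's characteristic offset once, store it as a profile, and subtract it from every readout. A minimal NumPy sketch of the idea (the array shape and noise level are illustrative assumptions, and this is not Canon's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# The "device profile": each pixel's fixed offset, measured at calibration.
fixed_pattern = rng.normal(0.0, 5.0, size=(4, 4))

def read_raw(scene):
    """A raw readout: scene signal plus the sensor's fixed-pattern offsets."""
    return scene + fixed_pattern

def read_corrected(scene):
    """Comb the fixed-pattern noise out by subtracting the stored profile."""
    return read_raw(scene) - fixed_pattern

scene = np.full((4, 4), 100.0)
print(np.allclose(read_corrected(scene), scene))  # the stored pattern cancels
```

In a real sensor the offsets also vary with exposure time and temperature, which is why Canon describes the profile as predicting the noise "for a given exposure."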
Westfall also asserted that these advances have solved another problem: noise generated during a time exposure. This shows up as white specks in an image and affects both CMOS and CCD sensors. In the past, camera vendors either warned against long exposure times or prevented them by setting a maximum shutter speed. With Canon’s approach, this is no longer necessary.
“The technology that we have developed for long exposure noise reduction in the CMOS chip that we use is quite effective for exposure times out into the five-minute range and longer,” Westfall said.
There are still areas where CMOS sensors must be improved. There is widespread consensus, for example, that the dark current must be further reduced and the signal-to-noise ratio increased even more. There are various ways this can be done. Different pixel layouts can help, as can the use of on-chip circuitry. Changes in the processing and fabrication of the sensors can eliminate leakage paths both along the surface and into the bulk of the semiconductor. That will slash the dark current. Special layers can also be incorporated into the sensor to further cut the dark current or improve the signal-to-noise ratio.
However, as these manufacturing steps are implemented, a different problem emerges. One of the advantages of the original CMOS technology was supposed to be the use of a readily available, standard manufacturing flow. Every processing tweak made to improve sensor performance makes this less and less the case.
“One of the original arguments for CMOS back in the beginning was that it’s cheaper, and it’s cheaper because you can use any foundry. There are many applications where that is still very true. However, I think that over time, as more and more sophisticated devices are being manufactured on the most up-to-date processes, that argument is getting weaker and weaker,” admitted Caleb Williams, product manager for high-speed sensors at Micron.
Williams did add that foundries have been specifically set up for image sensor fabrication, and that his company plans to continue to work with several of them on a variety of products in the coming years. He also noted that Micron’s advanced CMOS processes for memory chip manufacturing have helped lay a path for imager technology. Finally, he said that the company is making CMOS sensors because of the ease with which they can be manufactured in its existing facilities. Thus, it is clearly possible to get good sensor performance using a variation on standard processing.
Through a dielectric veil
There is another issue, though, that increasingly confronts manufacturers. As semiconductor feature sizes shrink, the number of interconnect layers grows. So while a 0.5-μm CMOS process may have three layers of metal interconnect, a 0.18-μm process may have four, five or six. The additional layers allow circuit designers to achieve greater density. If a CMOS sensor incorporates electronics into a device — the camera-on-a-chip idea — designers would like to be able to use all of the layers of metal interconnect available.
The problem is that a dielectric film must accompany every layer of metal. The film acts as an insulator and prevents the layer from shorting out to other metal traces above and below. Optically, the dielectric also has an impact. As one dielectric film is stacked on top of another, fewer and fewer photons can fight their way down to the sensor sitting in the silicon at the bottom of the heap.
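The effect compounds multiplicatively: if each metal/dielectric level passes some fraction of the incident light, n levels pass that fraction to the nth power. A toy model (the 8 percent per-level loss is an assumption chosen for illustration, not a measured figure):

```python
def stack_transmission(per_layer_t, n_layers):
    """Fraction of photons surviving n stacked dielectric films, assuming
    each film independently transmits the fraction per_layer_t."""
    return per_layer_t ** n_layers

# With an assumed 92 percent transmission per level, a three-metal
# process passes about 78 percent of the light, but a six-metal
# process passes only about 61 percent.
for n in (3, 6):
    print(n, round(stack_transmission(0.92, n), 2))
```

Real stacks behave less simply (thin-film interference and light-guiding at the pixel aperture also matter), but the trend against deeper metal stacks is the same.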
There are some solutions to this problem. One is to put the dense electronics on one chip and the image sensor on another. This is expensive and kills the camera-on-a-chip idea. Another solution, which is being implemented by manufacturers such as Tower Semiconductor, is to simply use a CMOS process with fewer layers of metal and fewer dielectric films. The drawback here is that this does not allow the most efficient and state-of-the-art circuit design. A third approach avoids the problem altogether: manufacture the silicon, thin it by back-side lapping to the point of transparency, and mount it on a substrate for mechanical strength. The sensor would then be illuminated from the back.
This problem must be solved in some fashion. The feeling among vendors is that for CMOS sensors to realize their full potential, a solution must be found to the dielectric film issue. Otherwise, the imagers will not benefit from the march of semiconductor technology.
“Basically, what we can see right now, if you are going on the smaller technology, you don’t really gain the big advantage,” said Vladimir Korobov, process application manager at Tower Semiconductor.