CMOS Challenges: What’s Next for CMOS Image Sensors?
Competing with CCDs means meeting or exceeding all of their critical price and performance characteristics.
The use of complementary metal oxide semiconductor (CMOS) fabrication processes to produce image sensor array chips was first demonstrated in the late 1980s. Since then, the goal of integrating all analog and digital imaging circuits on a single die with a DRAM cost model has continued to be the Holy Grail for designers. Along the way, though, CCD makers have aggressively shrunk arrays and slashed prices to keep CMOS sensors from getting a foothold in the market.
For almost 30 years, CCDs have dominated the market in terms of speed, sensitivity, reliability, packaging and price for high-volume electronic image capture. Product designers are unwilling to compromise on the performance their customers have come to expect, and price discounts alone will not induce them to switch to CMOS sensors.
The value of resolution beyond three megapixels falls off rapidly, not because of memory requirements, but because of limits in display resolution (both for computer monitors and the Web) and the fact that 8 x 10-in. printouts are satisfactory for most people at three megapixels. This leaves CMOS sensor makers with a double burden: to meet or exceed all critical price and performance characteristics of CCDs, and to deliver advantages to product developers that CCDs don't offer.
Cost competitiveness will necessitate addressing two factors. The first is array format. Smaller sensors mean smaller, cheaper optics -- up to a point. Today, for example, megapixel (i.e., 1280 x 960) CCDs are available in 1/4-in. format. For CMOS technology to achieve this pixel density with a typical 4:3 aspect ratio would require a 2.5-µm pixel dimension [(16 mm x 1/4 x 4/5)/1280 pixels] -- far smaller than most CMOS sensor pixels today.
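The bracketed arithmetic can be checked in a few lines. A minimal sketch, assuming the conventional definition that a 1-in. optical format corresponds to a 16-mm image-circle diagonal (the function name and parameters are illustrative, not from the article):

```python
# Pixel-pitch arithmetic behind the 2.5-um figure above. Assumes the
# usual convention that a "1-in." optical format means a 16 mm diagonal.

def pixel_pitch_um(format_fraction, h_pixels, aspect=(4, 3)):
    """Pixel pitch in microns for a sensor of the given optical format."""
    diagonal_mm = 16.0 * format_fraction               # 1/4-in. -> 4 mm diagonal
    w, h = aspect
    width_mm = diagonal_mm * w / (w**2 + h**2) ** 0.5  # 4:3 -> width = 4/5 of diagonal
    return width_mm * 1000.0 / h_pixels

print(round(pixel_pitch_um(1/4, 1280), 3))  # -> 2.5 (um), matching the text
```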
CMOS devices will continue to fight the fill-factor issue as designers try to scale pixels down to match CCD array formats (Figure 1). This is driving them to seek smaller transistor design rules to maintain fill factors during scale-down. Microlens arrays alone are not sufficient to compensate for a shrinking fill factor. With smaller feature sizes comes lower saturation voltage, which limits dynamic range. Smaller pixel capacitances also lead to greater thermal noise.
Figure 1. As CMOS sensor designers continue to fight a fill-factor problem that CCD array formats don't face, some researchers are pursuing amorphous silicon grown over the active transistor circuitry to maximize fill factor while eliminating the need for a microlens array.
The second cost determinant is total process yield. This encompasses not only that of the silicon, but also that of the optical postprocesses, packaging and testing. Because CMOS sensors are optical by nature, care is necessary to minimize contamination at every step. Once devices are coated and packaged, any trapped debris will be forever evident in flawed (dark) pixels.
One interim option is to develop optical test strategies to "grade" the sensor based on defect density and location parameters determined by the application. Such a test would use a white-light source that is highly uniform in both intensity and angular distribution as seen by all pixels. Ideally, all manufacturing steps from silicon wafer processes to packaging would occur in the same cleanroom to minimize exposure and handling. This also would eliminate the need for the typical passivation layer between the silicon and the optical layers, which can steal up to 20 percent of the light.
Current CMOS devices have acceptable noise levels and a dynamic range superior to that of CCDs, but to compete with CCDs' performance, CMOS manufacturers will need to improve photodiode sensitivity by 30 to 50 percent without adversely affecting speed, noise or dynamic range. This is especially important for devices operating under diminished lighting or at video rates, where exposure times are limited to 33 ms.
One remaining problem is the rolling shutter (as opposed to a snapshot, or global, shutter). Because pixel architectures rarely enjoy enough room for a dedicated storage capacitor in each pixel, the charge must be collected and read out row by row, with a small delay between rows. This produces an objectionable shear effect when capturing horizontally moving objects at video rates, making CMOS sensors unsuitable for camcorders and many quality video applications.
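The magnitude of that shear is easy to estimate: each row samples the scene slightly later than the one above it, so a moving edge lands in a different column on every row. A toy calculation (the frame size and object speed are illustrative assumptions, not figures from the article):

```python
# Rolling-shutter shear, sketched numerically. Each row is read one
# row-time after the previous one, so a vertical edge moving at v px/s
# appears tilted across the frame.

ROWS = 480
T_ROW = 33e-3 / ROWS      # ~69 us/row at the 33 ms video frame time
V = 1000.0                # assumed horizontal speed of the edge, pixels/s

# Column where the moving edge is sampled on each successive row.
edge_x = [round(100 + V * (row * T_ROW)) for row in range(ROWS)]
shear = edge_x[-1] - edge_x[0]
print(shear)  # total horizontal skew, in pixels, from top row to bottom
```

At these numbers the edge lands 33 pixels farther right by the bottom of the frame, which is the visible tilt the text describes.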
Sensors and packaging
Most image sensor designers have backgrounds in analog or mixed-mode chip design, with little or no experience dealing with optical issues at the pixel or array scale. This is evident in their dependency upon vendors of optical postprocessing services to recommend materials and specifications. With little or no appreciation for the three-dimensional structure of a specific pixel design, these vendors can deliver only rough guesses as to what should be used.
There also is little understanding of how to intricately balance silicon's geometric design and spectral response with the design of the optical black (light shield), the color filter array, the microlens array and even the objective lens used by the end-product designer. For instance, it is necessary to carefully match the color filter array to the silicon spectral response along with an IR cut filter to ensure a natural white balance (RGB) under daylight conditions (to which the spectrum of any strobe flash lamp is also matched).
In addition, designers must optimize the optical black and microlens array to deliver focused light to the heart of the photon absorption region, while maximizing the acceptance angle of incoming light rays and preventing optical crosstalk. Crosstalk occurs when the array is exposed to light of one color, such as red, and there is also some signal from adjacent green or blue pixels (Figure 2). This is not to be confused with electrical crosstalk, where, under white-balanced light that is modulated pixel by pixel (e.g., checkerboard), dark pixels generate a signal. A wide acceptance angle is especially important for the corner pixels viewing the cone of light from the back of the camera lens at a steep angle. Mismatch in this design promotes excessive roll-off of image brightness in corners.
Figure 2. To reduce optical crosstalk, CMOS manufacturers must take care to ensure that highly off-axis rays are not absorbed by the wrong photodiode.
Another issue often not given due consideration is package selection. The key here is to pay attention to heat dissipation because noise is a strong function of die temperature, and proper ground planes and shielding are necessary to prevent system-level noise from corrupting low-level analog signals. Designers have thus come to appreciate the difficulty of integrating processors onto the sensor die because of thermal and noise problems. Because processors are usually high-pin-count devices, and sensors usually low-pin-count, a true single-chip camera is unlikely in the near future for all but the simplest and least demanding vertical niches, such as toys, low-resolution PC cameras and very inexpensive digital still cameras.
Designers can leverage packaging at the system level by simplifying the interface between die and camera lens, eliminating components such as a cover glass and its attendant assembly steps and optical losses. They also are beginning to incorporate the IR cut filter into the camera lens by using a dichroic coating on the last (closest to the sensor) lens surface. This is necessary to prevent silicon sensitivity in the near-IR from overwhelming the light signals in the visible region.
Logistically, managing multiple vendors for silicon, optical postprocessing, packaging and test can distract CMOS designers from concentrating on their architectures and layouts. Coordinated design specifications for all steps in the production process would facilitate up-front optimization of the design for better performance and reliability.
Ultimately, a fundamental limit to new CMOS designs is the inevitable trade-off between volume and valued sensor capabilities. No one wants to pay for more functionality than is needed from a sensor, which challenges designers to add functions that either unleash previously untapped benefits or that leverage cost and performance advantages at the system level.
One promising concept involves making image sensors smart enough to anticipate and adjust to widely varying scene conditions. Today, most camera systems expect the image signal processor chip to manage the sensor brightness and white balance settings for an arbitrary CCD or CMOS device. This can create a lag in camera readiness, a problem with today's digital still cameras. A sensor always ready with the best pixels could mean faster time-to-shot and simpler image processing, along with superior image quality. Such sensors could be self-managing and upgradable as better algorithms are developed. It is far better to make these adjustments at the image source and to clean up imperfections that would otherwise be foisted upon the image signal-processing chip.
Until tighter manufacturing controls all but eliminate sensor defects, implementation of an on-chip defect-correction scheme at the signal source could simplify the system and optimize results. Closely related to this is the notion of virtual pixel scaling, where neighboring pixels are electronically combined within the array to instantly trade off image resolution for higher sensitivity. In addition, a sensor with versatile image formatting (in the sense of real-time subwindowing and subsampling) could provide features such as antijitter and true digital zoom without requiring frame-buffering memory.
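The virtual-pixel-scaling idea amounts to what is often called binning. A minimal sketch in plain Python (an illustration of the concept, not any vendor's on-chip implementation): summing each 2 x 2 neighborhood quarters the resolution while quadrupling the charge collected per output pixel.

```python
# "Virtual pixel scaling" as 2x2 binning: trade resolution for sensitivity
# by combining neighboring pixels into one larger effective pixel.

def bin2x2(frame):
    """Sum each 2x2 block of a row-major frame (list of equal-length rows)."""
    return [
        [frame[r][c] + frame[r][c + 1] + frame[r + 1][c] + frame[r + 1][c + 1]
         for c in range(0, len(frame[0]), 2)]
        for r in range(0, len(frame), 2)
    ]

frame = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
print(bin2x2(frame))  # -> [[14, 22], [46, 54]]: 4x the signal per output pixel
```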
Figure 3. Different white-light levels between colored pixels can produce cyan and yellow stripes across an image, which dictates the need for an optical low-pass filter in higher-quality cameras.
Every sensor in volume production uses separate photosites for the RGB color information from the image. Schemes for better collocation of these pixels within the array can eliminate the need for an optical low-pass filter, which is used in higher-quality cameras to share the light-level information among neighboring pixels of different colors. This reduces (but does not eliminate) the color-aliasing or moiré effect for scenes with fine-edged details (Figure 3). Eliminating the need for the optical low-pass filter would have the added benefits of sharpening the picture and reducing system cost.
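The aliasing mechanism is easy to demonstrate. A toy sketch (assuming a standard RGGB Bayer mosaic; the stripe pattern and numbers are illustrative, not from the article's figure): fine white stripes land on only some of the R, G and B photosites, so the per-channel averages diverge even though the scene is gray.

```python
# Why fine white detail aliases into color on a color-filter-array sensor:
# 1-pixel white stripes hit only half the columns, so R and B photosites
# see different light levels depending on the stripes' position.

BAYER = [['R', 'G'],    # RGGB mosaic, repeating every 2x2 pixels
         ['G', 'B']]

def channel_means(stripe_phase, size=8):
    """Mean sampled value per color for white stripes on even/odd columns."""
    totals = {'R': 0, 'G': 0, 'B': 0}
    counts = {'R': 0, 'G': 0, 'B': 0}
    for y in range(size):
        for x in range(size):
            color = BAYER[y % 2][x % 2]
            value = 255 if x % 2 == stripe_phase else 0   # white/black stripes
            totals[color] += value
            counts[color] += 1
    return {c: totals[c] / counts[c] for c in totals}

print(channel_means(0))  # stripes on even columns: R bright, B dark (yellowish)
print(channel_means(1))  # stripes on odd columns: B bright, R dark (cyanish)
```

Shifting the stripes by a single pixel flips the false color from yellowish to cyanish, which is the striping effect the Figure 3 caption describes; an optical low-pass filter suppresses it by spreading each point of light across neighboring photosites of different colors.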
Several trends in CMOS design and manufacturing show signs of becoming de facto standards. Integrated production services from silicon fabrication through packaging and test will become the norm. Sensors will shrink until the cost of the camera lens rises rapidly because precision production of tiny optical components is not scalable by equipment or processes, as is the case with silicon wafer production (Figure 4). Designs will move beyond active pixels (those with transistors at each pixel) to smart pixels that automatically respond to a variety of scene conditions. On-sensor signal processing will remain primarily in the analog domain until pixel arrays become fundamentally digital architectures.
Two key industry questions remain. First, will CMOS ever become the dominant image sensor architecture, and second, can the technology open whole new major markets unavailable to CCDs?
Figure 4. Sensor designers must work to balance cost and performance in an environment where tighter array geometries still require more-expensive lenses with higher resolving power.
The first question can be addressed by aggressively pursuing a combination of cost, performance and circuit integration until system designers begin to prefer CMOS devices to CCDs. The second question invites an exciting industry paradigm shift. At present, companies in markets such as consumer electronics do not develop and build CCDs in house because their cost and complexity are not justified by volume requirements. But, when CMOS devices become as easy to develop as digital application-specific integrated circuits are today, end-product designers should begin to make widespread use of a myriad of CMOS architectures to add unique features and conveniences.
One powerful enabler will be the all-digital pixel, which will facilitate on-chip digital image processing at the pixel, column and array levels. More importantly, chips incorporating an all-digital image sensor could be fabricated in leading-edge digital foundries rather than in specialized CMOS-sensor mixed-mode foundries. This approach could deliver on the long-awaited promise of high system integration at the lowest cost.
Meet the Author
Rudi Wiedemann is president of Wiedemann Associates, a marketing consulting group based in Fremont, Calif., that specializes in image capture and display technologies. He holds a BS in physics with postgraduate work in laser resonators and beam propagation.