Image capture of real-world scenes – and blending it with virtual information represented as visual patterns – is vital in a growing number of industrial and consumer applications. It is used in intelligent transportation systems such as traffic monitoring and flow optimization; in security applications; and in day-to-day recreational use in smartphones, tablets and PCs. A major reason for this accelerated adoption of image capture is cost reduction, which fuels the integration of still and video cameras into large data-processing environments. Lately, security and consumer-convenience products have driven the market to improve quality of life and ease of use.

As image-capture applications open up in a broad variety of products and systems, no single image sensor can serve all those different conditions. Every system needs a specific imager designed to match its requirements (Figure 1).

Figure 1. Image sensors are essential for a broad range of digital systems and products. Photo courtesy of CMOSIS.

This is especially true for industrial vision applications. Digital cameras – especially those built for narrowly defined commercial, industrial or administrative purposes – require carefully laid-out image sensors adapted to those applications to deliver the best possible results in terms of resolution, noise, speed, spectral sensitivity, robustness, life cycle and price point. A comparison with natural vision in animals is instructive here: Birds of prey, snakes and cats all depend on different kinds of visual input in terms of spectral sensitivity and resolution, having evolved to the specific circumstances and necessities of their lives.

This is where CMOS sensors and cameras come in. They deliver good imagery, and they can be narrowly tailored to their specific realms of application.
In addition, CMOS sensors, by virtue of their operational principle, can integrate image capture with multiple support functions on-chip. These support functions can include image processing for high dynamic range, correlated double sampling, noise suppression, windowing and subsampling, high-speed analog-to-digital conversion, and LVDS interfacing. All this leads to more compact camera designs, better connectivity and system compatibility, and ease of use. Furthermore, a new patented method drastically reduces the impact of wafer-processing-stage defects on CMOS image sensors, enabling the manufacture of large sensor devices on an economic scale.

Sensor market trends

High data throughput is very much in demand for industrial vision: Users want to get as much data off their imagers as possible, usually seeking the highest possible frame rate for a specific application. Image-processing systems have increased data throughput tremendously over the past few years, from 30-60 fps to ~120-240 fps. Sensors and cameras have to accommodate these higher frame rates to allow for higher production and inspection rates.

Table 1. Key Specifications of the CMV Series

Specification                 CMV300               CMV2000/4000/8000/12000    CMV20000
Pixel Size                    7.4 µm               5.5 µm                     6.4 µm
Full Well Charge              30,000 e-            13,500 e-                  15,000 e-
Sensitivity                   6 V/lux.s (550 nm)   4.64 V/lux.s (550 nm)      8.29 V/lux.s (550 nm)
Dark Noise                    20 e-                13 e-                      8 e-
Dynamic Range                 64.1 dB              60 dB                      66 dB
SNRmax                        45.1 dB              41.3 dB                    41.7 dB
Parasitic Light Sensitivity   1/50,000             1/50,000                   1/50,000
Dark Current                  100 e-/s (@23 °C)    125 e-/s (@23 °C)          125 e-/s (@23 °C)
Fixed Pattern Noise
Power Consumption                                  600 mW (CMV2000, CMV4000); 1100 mW
                                                   900 mW (CMV8000);
                                                   3000 mW (CMV12000)

In terms of frame rate, CMOS cameras have improved to the point that they can outperform traditional CCD-based imagers. A new off-the-shelf 12-MP digital image sensor, the CMV12000 from CMOSIS, delivers 300 fps at full resolution (10 bits per pixel).
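To get a feel for what these frame rates mean at the interface, the raw data rate implied by the quoted CMV12000 figures can be checked with a few lines of arithmetic. The 4096 × 3072 geometry is taken from the 4K resolution cited later in this article; the windowing scaling is a first-order approximation that assumes readout time is proportional to the number of rows read.

```python
# Back-of-the-envelope data rate for a 12-MP, 10-bit, 300-fps sensor
# (figures quoted in the text; 4096 x 3072 geometry assumed from the
# 4K resolution mentioned in the article).
width, height = 4096, 3072
bits_per_pixel = 10
full_fps = 300

bits_per_frame = width * height * bits_per_pixel
raw_rate_gbps = bits_per_frame * full_fps / 1e9
print(f"raw pixel data rate: {raw_rate_gbps:.1f} Gbit/s")
# A rate like this must be split across many parallel LVDS output lanes,
# since a single lane typically carries only a few hundred Mbit/s.

def windowed_fps(rows_read, full_rows=height, fps=full_fps):
    """Approximate readout-limited frame rate in windowing mode:
    reading fewer rows raises the frame rate roughly in proportion."""
    return fps * full_rows / rows_read

print(f"1024-row window: ~{windowed_fps(1024):.0f} fps")
```

This also illustrates why windowing and subsampling modes, mentioned below, deliver "even higher frame rates": the per-frame readout burden shrinks while the output bandwidth stays fixed.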
Even higher frame rates are feasible in windowing or subsampling modes.

The same fast-paced CMOS progress applies to pixel count, or resolution, which has gone from 1.3 MP (SXGA) to 2, 4, 8, 12 and 20 MP. Higher resolution enables a camera to capture one big overall image containing numerous details that can be analyzed individually (Figure 2). This is especially important in traffic-management applications, where one high-resolution camera can track, for example, four lanes of traffic instead of four single-lane cameras having to be deployed.

Figure 2. Capturing a large field of vision with great detail. Photo courtesy of CMOSIS.

In traffic and video recording applications, 3.5- to 4-K resolution (4096 × 3072 pixels) is the standard today. In high-end inspection and surveillance applications – flat-panel inspection or aerial mapping, for instance – resolutions can go up to 20 MP. Beyond this limit, the sensor would become too large to fit in a space-constrained application. However, the general trend points to higher resolutions for global-shutter cameras; it might take another year to reach 40-50 MP. In this regard, rolling-shutter sensors are still defending their turf: They can offer pixel counts of 70 MP, but one drawback is the motion artifacts that occur with fast-moving objects.

Demands on image-capture systems

High sensitivity: This should be coupled with low noise levels as the foremost consideration of industrial users. High sensitivity delivers enough image data at short exposure times. Low noise and high sensitivity also allow operation at low light intensity by applying gain if needed. High sensitivity across the visible spectrum should be accompanied by increased sensitivity in the near-infrared.

Low cost: Minimum system cost is best achieved via high-volume production.

High frame rate: This enables several shots of an object in rapid-fire sequence to track and document its movements.
It is supported by the technique of exposing one image while the previously captured image is being read out.

Ease of use: This pertains to implementing all required image-processing functions onboard the sensor system, and to programming the exposure and readout modes through an SPI (serial peripheral interface).

No image correction: Reading out image data in RAW format should yield images as noise-free as possible, with no costly post-processing of the captured images needed.

High resolution: This equals a large field of view while delivering high image detail.

Global shutter for CMOS sensors

Offering smaller pixel formats in combination with a global shutter is another major advance that CMOS imagers have made in the past few years. Interline CCD image sensors have offered a global shutter by design; CMOS sensors competing with those older imagers now need to offer a global shutter as well.

Figure 3. A rolling-shutter design (a) causes fast-moving objects to appear skewed; interfering flashes expose only part of the frame. This is not the case with global-shutter designs (b). Photo courtesy of CMOSIS.

A global shutter exposes all pixels of a sensor at the same time and over the same duration. It is a more complex concept – and initially more costly – because it requires some kind of local storage element (usually a capacitor) inside each pixel, plus a control function to start and stop the exposure. All this enlarges pixel size. But CMOS technology has progressed to the point that the storage nodes inside a pixel can be trimmed down to a reasonable size. These capacitive storage elements hold the pixel values so they can be read out sequentially, one pixel at a time, after the exposure stops.

A rolling shutter, in marked contrast to this scheme of simultaneously exposing all pixels, exposes an image sequentially, row by row, top to bottom, at different moments in time.
This causes artifacts that skew fast-moving objects, as the exposure trails their horizontal or vertical position from one moment to the next across the image plane (Figure 3). Another artifact occurs when illuminating the scene with a short-burst flash: Only a few rows or parts of the image are exposed, whereas other areas remain dark.

The rolling shutter is the traditional method because it is much easier to build a pixel architecture adapted to the row-by-row exposure scheme. A CMOS sensor with four-transistor pixels usually comes equipped with a rolling shutter.

Figure 4. Different exposure times for odd and even lines achieve a higher dynamic range. Photo courtesy of CMOSIS.

The complication when providing a global shutter in a CMOS sensor is placing the storage capacitor inside the pixel. This takes up space and leads to a larger pixel layout, which is more expensive. But global-shutter technology has improved greatly, making smaller storage nodes feasible at reasonably small pixel sizes and lower price points. Advanced global-shutter CMOS sensor designs feature pixels down to 5.5 × 5.5 µm. The goal is 3.5-µm pixels in global-shutter cameras; such extremely small pixels will likely be available within a year or two.

Of course, this scaled-down CMOS layout requires fabs or foundries with wafer-processing capabilities that can accommodate such small pixel dimensions, as well as the design know-how to create the appropriate pixel architecture and technology. One complication is that the active area of a global-shutter pixel is slightly smaller than that of a corresponding rolling-shutter pixel, but a microlens in front of it compensates for this loss of light input.

Figure 5. Piecewise linear shaping of the sensor response curve achieves a higher dynamic range. Photo courtesy of CMOSIS.
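The rolling-shutter skew shown in Figure 3 can be estimated with simple arithmetic: each row is captured one row-readout time later than the one above it, so a moving object shifts horizontally by its speed times that accumulated delay. All numbers below are illustrative assumptions, not specifications of any particular sensor.

```python
# Back-of-the-envelope rolling-shutter skew estimate (cf. Figure 3).
# Every value here is an assumed, illustrative figure.
rows = 2048                # image rows spanned top to bottom
row_time_us = 10.0         # assumed time to read out one row
speed_px_per_s = 5000.0    # assumed horizontal object speed, pixels/second

# Total delay between exposing the top row and the bottom row:
frame_scan_time_s = rows * row_time_us * 1e-6
# Horizontal shear accumulated over that delay:
skew_px = speed_px_per_s * frame_scan_time_s

print(f"top-to-bottom scan time: {frame_scan_time_s * 1e3:.1f} ms")
print(f"horizontal skew: {skew_px:.0f} pixels")
# A global shutter exposes all rows at once, so this skew term vanishes.
```

Even modest row times therefore produce a visible shear on fast objects, which is why the eight-transistor global-shutter architecture discussed next matters for machine vision.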
Eight-transistor pixel architecture

Fitting a global shutter to a CMOS image sensor requires more complex pixels, but a specific architecture overcomes this obstacle. The patented eight-transistor (8T) global-shutter architecture from CMOSIS differentiates it from the traditional 4T rolling shutter and the 5T global shutter. The crucial point is that the 8T architecture provides two storage elements inside the pixel rather than just one (as in the 5T structure). They separately store one sample taken at the beginning of the exposure and another taken at the end of the exposure period. During readout, a clever algorithm subtracts the two samples to lower the total noise and increase shutter efficiency. This way, noise levels below 10 electrons can be reached, and a shutter efficiency of 99.999 percent has been demonstrated. This technique, called correlated double sampling, enables the lowest fixed-pattern noise and low parasitic-light sensitivity compared with 5T layouts or other designs.

Figure 6. Enhanced sensor sensitivity in the near-infrared, as demonstrated in the CMV family from CMOSIS. Photo courtesy of CMOSIS.

Time-delayed integration

Time-delayed integration (TDI) imaging is another clever way to better capture moving objects. By synchronizing pixel exposure with the motion of the camera or the object, the effective exposure time can be increased. TDI has traditionally been difficult to implement in CMOS because of the lack of a charge-addition circuit. The application requires the combination of a global shutter and a low-noise readout method.

High dynamic range

Another factor in improving global-shutter CMOS sensors is applying a specific method to achieve a high dynamic range (HDR). HDR expands the scale of the captured light and dark areas of an image to depict both in a satisfactory way.
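The dual-exposure idea behind Figure 4 – odd and even lines integrating for different times – gives a feel for how such an expanded scale is obtained. The sketch below is a generic illustration of merging a long and a short exposure, with an assumed 10-bit full scale and an assumed 8:1 exposure ratio; it does not reproduce any specific CMOSIS pipeline.

```python
# Minimal sketch of dual-exposure HDR fusion (cf. Figure 4): one sample
# with a long exposure, one with a short exposure, merged per pixel.
# FULL_SCALE and RATIO are illustrative assumptions, not sensor specs.
FULL_SCALE = 1023   # assumed 10-bit ADC full scale
RATIO = 8           # assumed long/short exposure-time ratio

def fuse(long_px, short_px):
    """Trust the long exposure unless it clips; otherwise fall back to
    the short exposure scaled up by the exposure ratio, extending the
    representable range to roughly RATIO * FULL_SCALE."""
    if long_px < FULL_SCALE:
        return long_px
    return short_px * RATIO

print(fuse(500, 70))    # mid-tone: long exposure kept as-is
print(fuse(1023, 400))  # clipped highlight: short exposure scaled up
```

Dark regions keep the low noise of the long exposure, while highlights that would clip are recovered from the short one – the essence of widening the captured scale.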
Overexposure is typical when looking at the sky or at very bright light sources, which tend to appear as blown-out white areas, while the darkest shadows tend to be underexposed and recede into unstructured black. Both light and dark exposure levels have to be balanced across the image. The reason for this unequal treatment of light and dark areas is the linear response curve of image sensors, as opposed to the logarithmic behavior of the human eye. HDR is helpful in traffic applications for subduing glaring reflections when reading license plates or for countering the effects of bright headlights.

Figure 7. CMV-series sensors from CMOSIS were developed for machine-vision applications. Photo courtesy of CMOSIS.

The desired logarithmic-like response in a CMOS sensor can be achieved on-chip in several ways: by sequential image capture using widely varying exposure times; by recording the light and dark areas separately – equipping the sensor's odd and even rows with different sensitivities and then calculating an appropriate value for all regions of the image (Figure 4); or by using PLR (piecewise linear response), a more logarithmic shaping of the sensor's response curve (Figure 5). The user can choose a method depending on the application, as each has its specific benefits and drawbacks.

Enhanced NIR sensitivity

Extending the spectral range of CMOS sensors into the near-infrared is becoming more and more important as a market trend (Figure 6). This especially applies to traffic applications, but also to machine vision, because it allows the monitored scene to be illuminated with flash light that is invisible to the human eye.

Meet the author

Pieter Willems is the manager of standard products at CMOSIS in Antwerp, Belgium; email: firstname.lastname@example.org.