
Choosing a Scientific CCD Detector for Spectroscopy

Photonics Spectra
Mar 2002
Dr. John R. Gilchrist

Since the invention of the charge-coupled device by Willard Boyle and George Smith at Bell Labs in 1969, there has been an explosion in its use in scientific, medical and industrial imaging, and in spectroscopy applications.

CCDs represented a revolutionary step forward in detecting at UV to near-IR wavelengths, particularly for spectroscopy. Their two-dimensional nature and unique combination of sensitivity, speed, low noise, ruggedness and durability in a compact and relatively economical package still are significant advantages over single-channel detectors. CCD detector arrays are most useful when combined with an optical system to create either a conventional image or a spectral one.

Figure 1.
No single CCD detector configuration satisfies all spectroscopy applications. Users must analyze their specific needs to weigh the trade-offs in selecting the detector type, structure and cooling method. And proper optical design should not be forgotten. Images courtesy of Jobin Yvon Inc.

A multichannel CCD can simultaneously collect spectrally dispersed information over a wide range at high speed when used in combination with an aberration-corrected imaging spectrograph. Indeed, the two-dimensional nature of the detector in such a system enables the simultaneous measurement and analysis of multiple spectra from several spatial locations or sources.

CCD choice — and a caveat

Choosing the correct detector is important to the success of a spectroscopic experiment. But while the detector’s specifications can be a good place to start an evaluation, they are only one factor in a system’s performance.

Equally or more important is the optical design that collects the signal and images the light from the sample into the spectrograph, along with the choice of spectrograph, its configuration and its optimization for the particular application. After all, if you can’t efficiently transmit the light to the detector, it makes no difference whether you use a $10 CMOS linear array or a $40,000 liquid-nitrogen-cooled scientific CCD.

That said, in selecting a CCD detector for a spectroscopic application, the user must evaluate detector type, size, cooling method, quantum efficiency, read- and dark-noise performance, pixel size, full-well capacity and controller-dependent specifications such as analog-to-digital conversion operation. From these values, the maximum and minimum signals can be calculated, along with the signal-to-noise and dynamic range of the detector under various illumination conditions. A more rigorous evaluation also may take into consideration the charge-transfer efficiency, linearity, pixel uniformity and etaloning performance.
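The calculations mentioned above are straightforward. The sketch below estimates per-pixel signal-to-noise ratio and analog dynamic range from a handful of specifications; all the numbers are hypothetical, chosen only to illustrate the arithmetic, and real values must come from the detector data sheet.

```python
import math

def snr(signal_e, dark_e_per_s, t_s, read_noise_e):
    """Estimate per-pixel signal-to-noise ratio.

    signal_e      -- photogenerated electrons collected during the exposure
    dark_e_per_s  -- dark current in electrons/pixel/second
    t_s           -- integration time in seconds
    read_noise_e  -- rms read noise in electrons
    Shot noise on the signal and dark charge is taken as Poissonian.
    """
    dark_e = dark_e_per_s * t_s
    total_noise = math.sqrt(signal_e + dark_e + read_noise_e ** 2)
    return signal_e / total_noise

def dynamic_range(full_well_e, read_noise_e):
    """Analog dynamic range: largest storable signal over smallest detectable."""
    return full_well_e / read_noise_e

# Hypothetical detector: 500,000 e full well, 10 e read noise,
# 0.05 e/pixel/s dark current, 60-s exposure, 10,000 e signal.
print(round(snr(10_000, 0.05, 60, 10), 1))   # ~99.5
print(round(dynamic_range(500_000, 10)))     # 50,000:1
```

Note that for a bright signal the shot noise dominates, so the ratio approaches the square root of the signal itself; read and dark noise matter most at low light levels.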

CCDs are multichannel silicon array detectors with metal-oxide semiconductor architectures. These one- or two-dimensional arrays consist of individual detector elements, or pixels, defined by the capacitor, or gates. By changing the gate voltages, the charge can be stored and transferred.

Photogenerated charge is produced under a metal-oxide semiconductor capacitor or photogate. After charge generation, the signal transfer occurs in the capacitors for all the devices; hence, the popular use of the term “CCD detector.” Some interline-transfer devices employ a hybrid arrangement with photodiodes to create the charge and transfer it to the CCD structure. In nearly all cases, the detector is called a CCD, but it really should be called a solid-state detector.

Most scientific spectroscopy applications use two-dimensional full-frame CCD detectors that are photosensitive across the full active surface area. In these scientific devices, the readout registers are arranged along one edge of the detector. When aligned parallel to the direction of spectral dispersion, they are ideally suited for spectroscopic applications.

The photogenerated charge in the imaging area is moved to the readout register by a series of parallel shifts, sequentially transferring charge from one pixel to the next within a column until the charge finally collects in the readout register. A mechanical shutter blocks light to the detector during this transfer so that the image is not smeared, which would render the resulting spectra useless.

Other types of CCDs include frame- and interline-transfer devices. In the former, the detector consists of two areas, one for optical detection and the other for signal storage. The storage area is identical in structure to the imaging area but is covered with an opaque mask to prevent exposure to light. After exposure, the signal in the exposed area is quickly transferred to the storage area, from which the slower process of readout from the device can be efficiently performed. While readout occurs, the next frame may be collected, improving the duty cycle.

Interline-transfer CCDs consist of photodiode detectors that are separated by vertical transfer registers covered by opaque shields — rather like vertical venetian blinds in structure. After exposure, the signal is very quickly (on the order of 1 μs) transferred from the exposed area to the transfer register, minimizing image smear. A disadvantage is that the optical fill factor is a relatively poor 20 percent or so because the active detector occupies only a small part of the pixel area. But interline-transfer devices do not require a shutter, and this may be beneficial in high-speed applications.

Other solid-state detectors, such as charge-injection devices and active-pixel sensors, have become popular and often are incorrectly called CCD sensors. The pixels in the former are composed of two metal-oxide semiconductor gates that overlap and that share the same row and column electrodes. Because all pixels in a given row are tied together in parallel, the device’s capacitance is high. Compared with a true CCD, the output signal is relatively small and, thus, the signal-to-noise ratio is poor. Nonetheless, the readout from a charge-injection device is nondestructive and, if repeated many times, can be averaged to improve the signal-to-noise performance significantly.

First reported in 1993 by Eric R. Fossum’s team at NASA’s Jet Propulsion Laboratory in Pasadena, Calif., the active-pixel sensor consists of a photodiode, a reset transistor and a row-select transistor. These devices can be highly integrated, manufactured using CMOS technologies, and they often contain the analog-to-digital converter along with a correlated double-sampling circuit. The photodiode drives the line capacitance, and the detector does not rely on charge transfer as in a CCD detector. The pixels can be fully addressable and may be read at high frame rates, especially if a subregion is selected. However, because each pixel has its own amplifier, fixed pattern noise can be an issue. In addition, the devices are CMOS structures and tend to have higher dark-noise levels than metal-oxide semiconductor structures because of doping considerations.

A variety of full-frame CCD devices are available, including front- and back-illuminated and open-electrode configurations (Figure 2). Deciding which is best demands that one consider the requirements of the application (Table 1). In general, selections should be made based on the wavelength range of interest, spectral coverage and resolution, and the expected optical signal level. These requirements determine the chip type, total area, pixel size and cooling method.

Figure 2.
Several full-frame CCD device structures are available, including front-illuminated, back-illuminated and open-electrode (not shown). In front-illuminated CCDs, incident photons must penetrate a polysilicon electrode before reaching the depletion region. The back-illuminated (or "back-thinned") CCD, in contrast, avoids the electrode's influence entirely by exposing the sensor region through the thinned bulk silicon substrate.

Response and resolution

The quantum efficiency of the detector is a reasonable indicator of its spectral response. All CCD detectors display wavelength-dependent quantum efficiencies because the absorption coefficient of silicon is wavelength-dependent. Short-wavelength photons (i.e., blue) are absorbed at much shallower depths than those with longer wavelengths (i.e., red).

In fact, the absorption coefficient is essentially zero at wavelengths longer than 1.1 μm because the photon energy is less than the silicon bandgap energy and the detector becomes transparent to the light. This translates into a long-wavelength cutoff of 1.1 μm. If the application requires spectral measurement in the near-IR (for example, in the 1- to 1.7-μm range), a linear InGaAs array may be the best choice.
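The 1.1-μm cutoff follows directly from the bandgap: a photon must carry at least the gap energy to promote an electron across it. A one-line check, using the approximate room-temperature bandgap of silicon:

```python
# Cutoff wavelength lambda_c = h*c / E_gap, with h*c expressed in eV·nm.
HC_EV_NM = 1239.84    # Planck constant times speed of light, in eV·nm
E_GAP_SI_EV = 1.12    # approximate silicon bandgap at room temperature

cutoff_nm = HC_EV_NM / E_GAP_SI_EV
print(round(cutoff_nm))   # 1107 nm, i.e., about 1.1 um
```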

To further complicate matters, the detector’s quantum efficiency is also dependent on the operating temperature of the chip because the photon absorption length increases as the silicon bandgap increases with decreased temperature. This effect is important in the near-IR.

For example, at 1.06 μm, a 100 °C decrease in temperature may reduce the detector’s quantum efficiency by a factor of 3.5. The effect actually can be advantageous: The longer absorption depth slightly improves the quantum efficiency in the UV or near-visible wavelengths because it permits shorter-wavelength photons to reach the depletion region and thus contribute photogenerated carriers. Most chip manufacturers supply quantum efficiency curves at their own test temperatures, however, which makes comparison difficult.

In front-illuminated CCDs, the incident photons must penetrate a polysilicon electrode before reaching the depletion region. The transmittance of the electrode depends on its thickness, so chips from different manufacturers demonstrate different quantum efficiency responses in the 400- to 600-nm range. The electrode becomes opaque to wavelengths below 400 nm.

A variant of this CCD that offers a higher UV quantum efficiency is the so-called front-illuminated UV CCD, in which the detector is coated with a phosphor such as Lumogen. These phosphors convert UV radiation to green light, providing a 10 to 15 percent quantum efficiency response in the 120- to 450-nm spectral range. And because the coating is transparent at visible and near-IR wavelengths, it does not degrade the detector’s response there.

Another variant of this CCD is the open-electrode configuration, in which the central area of the electrode is etched to expose the underlying photosensitive silicon. This offers an uninterrupted pathway for the incident radiation to reach the depletion region. Such detectors exhibit quantum efficiencies of 30 percent or greater in the UV (Figure 3).

Figure 3.
The open-electrode CCD offers improved spectral response over the standard front-illuminated chip by etching the central area of the electrodes to expose the photosensitive region beneath. Open-electrode CCDs also avoid the etaloning that can plague back-illuminated chips in the near-IR.

In addition, their visible and near-IR response often is superior to that of front-illuminated devices. Compared with back-illuminated devices, open-electrode CCDs do not exhibit interference fringe effects (also called etaloning) in the near-IR. Etaloning performance is important when comparing devices because the recorded signal on a back-illuminated device, for example, may be obscured by etaloning effects despite an excellent quantum efficiency.

Back-illuminated, or back-thinned, CCDs are full-frame image sensors in which the substrate is polished and thinned to remove most of the bulk silicon substrate. They are illuminated from the back, and the polysilicon on the front does not influence the quantum efficiency of the detector. They are usually antireflection-coated for enhanced response in either the UV or the near-IR.

Back-illuminated devices, however, are significantly more expensive than their front-illuminated and open-electrode counterparts. They often require liquid-nitrogen cooling to reduce noise, and they can exhibit problems in response uniformity across the chip as a result of the thinning process. Most significantly, etaloning can be severe when the devices are used to measure at wavelengths longer than ~650 nm.

The severity of the etaloning effect manifests itself as an oscillation superimposed on the spectrum being measured, and it should not be underestimated. In the range around 650 nm, reflections from the boundaries of the back-thinned device form a constructive and a destructive interference pattern. A normally featureless spectrum will show a highly structured response, with peak-to-peak amplitudes of up to 20 percent of the average pixel response.
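The fringe period can be estimated by treating the thinned silicon as a plane-parallel etalon of thickness d and refractive index n, for which adjacent interference orders are separated by roughly Δλ = λ²/(2nd). The thickness and index below are illustrative assumptions, not values from the article:

```python
def fringe_spacing_nm(wavelength_nm, n_si, thickness_um):
    """Approximate fringe period of a thinned-silicon etalon: lambda^2 / (2 n d)."""
    lam_m = wavelength_nm * 1e-9
    d_m = thickness_um * 1e-6
    return lam_m ** 2 / (2 * n_si * d_m) * 1e9   # convert back to nanometers

# Assumed values: n ~ 3.6 for silicon in the red/near-IR, 15-um thinned substrate
print(round(fringe_spacing_nm(700, 3.6, 15), 1))   # ~4.5 nm between fringes
```

A fringe period of a few nanometers is comparable to many spectral features of interest, which is why the superimposed oscillation is so damaging to real measurements.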

Recently, antifringing technology has become available that reduces significantly — but does not eliminate — the effects of etaloning. Despite the disadvantages, and if cost is no object, back-illuminated devices frequently are the best choice in applications that demand very high performance.

The dispersion of the spectrograph and the size of the pixels in the CCD detector determine the spectral coverage and the resolution of the complete system. For an equivalent dispersion, smaller pixels provide better resolution. Common chips for spectroscopy are 1024 pixels wide by 128 or 256 pixels high, with 26 x 26-μm pixels. Applications requiring higher spectral or spatial resolution often use a 2048 x 512 detector because of its smaller 13.5 x 13.5-μm pixels.
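The relationship among pixel size, dispersion and coverage is simple multiplication. The sketch below compares the two chip formats mentioned above at a hypothetical linear dispersion of 4 nm/mm at the focal plane:

```python
def spectral_coverage_nm(n_pixels, pixel_um, dispersion_nm_per_mm):
    """Wavelength span falling on the array for a given linear dispersion."""
    array_width_mm = n_pixels * pixel_um / 1000.0
    return array_width_mm * dispersion_nm_per_mm

def nm_per_pixel(pixel_um, dispersion_nm_per_mm):
    """Wavelength increment sampled by a single pixel."""
    return pixel_um / 1000.0 * dispersion_nm_per_mm

# 1024 x 26-um pixels vs. 2048 x 13.5-um pixels, both at 4 nm/mm (assumed)
print(round(spectral_coverage_nm(1024, 26, 4.0), 1))   # ~106.5 nm across the chip
print(round(nm_per_pixel(26, 4.0), 3))                 # 0.104 nm per pixel
print(round(nm_per_pixel(13.5, 4.0), 3))               # 0.054 nm per pixel
```

The smaller pixels roughly halve the wavelength increment per pixel, while the wider array extends the total coverage, illustrating the trade-offs discussed above.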


The operating temperature of the detector determines the dark-current noise. As a rule, the dark current from a CCD detector is halved for every 9 °C drop in temperature. Two cooling methods are commonly used: liquid-nitrogen cooling, which offers the best noise reduction, and thermoelectric cooling with a two- or four-stage Peltier device, which offers convenience and uninterrupted operation. The spectral region of the experiment also should be considered because the quantum efficiency in the near-IR is strongly dependent on temperature.
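The 9 °C rule of thumb can be applied directly to estimate how much cooling an application needs. The starting value below is a hypothetical room-temperature dark current, not a figure from the article:

```python
def dark_current_e_per_s(i0_e_per_s, t0_c, t_c):
    """Scale dark current by the rule of thumb: halved for every 9 deg C drop."""
    return i0_e_per_s * 2 ** ((t_c - t0_c) / 9.0)

# Assume 100 e/pixel/s at 25 deg C; cool to -25 deg C (typical two-stage Peltier)
print(round(dark_current_e_per_s(100, 25, -25), 1))   # ~2.1 e/pixel/s
```

With these assumed inputs, the result falls within the 2 to 5 e per pixel per second range quoted below for two-stage Peltier-cooled detectors.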

Liquid-nitrogen-cooled detectors using a Dewar assembly can operate continuously for 24 to 72 hours, depending on the size of the Dewar. These detectors offer the lowest dark-current leakage and shot-noise levels without sacrificing charge-transfer or quantum efficiency, and they mainly are used when light levels are at their lowest and long integration times are required. Typical dark-current levels can be 1 to 3 e per pixel per hour at -133 °C, depending on the chip structure.

A multistage thermoelectrically cooled system is often a good choice for users who require detector performance approaching that of liquid-nitrogen cooling but who prefer the convenience of continuous, uninterrupted operation. For best noise performance, a thermoelectrically cooled detector should operate in multi-pinned phase (MPP), also called advanced inverted mode operation, to suppress dark current, and it should experience virtually no loss in charge-holding capacity.

Typical operating temperatures are approximately 200 to 210 K for four-stage Peltier-cooled CCD sensors. At this temperature, typical noise levels are 1 to 2 e per pixel per minute. Temperature stabilization of the detector can be achieved to a fraction of a degree, which enables reliable and repeatable spectroscopic measurements over long periods.

Coupled with inherently low dark-current leakage, this cooling method is ideal for experiments that require integration times of milliseconds to several minutes. Consequently, a four-stage Peltier-cooled system can be comfortably used in all but the most demanding low-light-level applications.

If the photon flux is high, a less expensive two-stage Peltier cooling system may be used. These systems usually are quite compact and cool to approximately -25 °C, which enables the CCD to achieve dark currents of 2 to 5 e per pixel per second. Two-stage Peltier-cooled detectors are ideal for applications such as transmission, absorption and emission measurements, and routine process monitoring.

The commercial CCDs used in consumer imaging applications, in contrast, usually are uncooled because of the high light levels involved and the very low bit depth of the analyzing electronics.

All CCD detectors display cosmetic blemishes or defects, regions of reduced sensitivity or regions with enhanced dark-current leakage. These blemishes might involve single pixels, pixel clusters or columns in the array. Each manufacturer defines and characterizes these blemishes differently, and it is often difficult to precisely compare their specifications.

Grade and dynamic range

There are three grades of scientific detectors: 0, 1 and 2. The most commonly used detectors are Grade 1 because they offer an excellent compromise between blemish specification and cost. For the ultimate performance, Grade 0 devices are recommended for their close-to-zero blemish specification, but they are approximately 40 percent more expensive than Grade 1 devices. Inevitably, one must balance price and performance based on the requirements of the experiment.

The dynamic range of a CCD detector depends on the integration time of the measurement, the binning conditions and temperature of operation, as well as the spectral content of the signal to be measured and the array spectral responsivity. Because of these variables, the dynamic range can vary considerably.

The analog dynamic range of a CCD detector may be defined as the ratio of the largest measurable useful signal to the smallest detectable signal. The smallest detectable signal for a CCD detector is limited by its readout noise. The largest signal depends on the full-well, or charge-storage, capacity of the individual pixels in the array. The dynamic range, therefore, also may be expressed as the ratio of the full-well capacity to the readout noise.

In spectroscopy, the individual pixels are often binned (added) together to make “superpixels.” In this case, the maximum signal intensity is limited by the full-well capacity of the readout register. (Note: If pixel binning is used, the readout time is reduced, which can be advantageous in high-speed applications when spectral or spatial resolutions permit.)

The analog full-well capacity of the individual pixels is proportional to the pixel area, with smaller pixels having a smaller charge-storage capacity. The dark-current noise also depends on the pixel size but is difficult to predict: Smaller pixels may lead to lower dark currents, but the increased electric fields within them may generate additional dark current, and the resulting noise can negate any advantage.

In practice, the full-well capacity of the 26 x 26-μm pixels in a 1024 x 256 sensor is approximately 500,000 e. By comparison, that of the 13.5 x 13.5-μm pixels in a 2048 x 512 sensor is approximately 150,000 e. The readout registers in both devices, however, are designed with a full-well capacity of about 600,000 to 750,000 e, and the noise levels of the detectors are similar. Therefore, the detectors are comparable in terms of analog dynamic range when used in a binned mode for spectroscopy.

Finally, a CCD system requires a means of digitizing the signal voltage from the detector. An analog-to-digital converter would normally perform this function. The detector manufacturer chooses the converter with an eye toward providing the fastest conversion time and the best noise performance at the highest conversion resolution (all for a reasonable cost).

The resolution of the analog-to-digital converter in the detector controller determines the digital dynamic range of the system. For example, a 16-bit converter offers 2¹⁶, or 65,536, discrete levels of intensity resolution. In contrast, video-format detectors use 8- or 10-bit converters, yielding 256 or 1024 levels, respectively, which is not very useful for scientific spectroscopy.
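Putting the binning and digitization points together: With binned superpixels, the register full well sets the maximum signal, and the converter's bit depth should at least cover the resulting analog range. A sketch with assumed figures (600,000 e register capacity, 10 e read noise):

```python
import math

def binned_dynamic_range(register_full_well_e, read_noise_e):
    """With binned superpixels, the readout-register capacity limits the signal."""
    return register_full_well_e / read_noise_e

def adc_bits_needed(dynamic_range):
    """Smallest ADC bit depth whose level count spans the analog dynamic range."""
    return math.ceil(math.log2(dynamic_range))

dr = binned_dynamic_range(600_000, 10)
print(int(dr))              # 60000
print(adc_bits_needed(dr))  # 16: a 16-bit converter (65,536 levels) covers it
```

This is why 16-bit converters are the norm for scientific spectroscopy, while the 8- or 10-bit converters of video-format detectors discard most of the available analog range.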

Analyzing needs

CCD detectors offer many significant advantages over single-channel ones. Their two-dimensional structure enables the simultaneous measurement of spectra from multiple points. And their ability to achieve very high levels of sensitivity with low dark-noise levels makes possible the measurement of all but the weakest optical signals.

Nevertheless, no single detector configuration can satisfy all of the possible spectral and signal-to-noise requirements in spectroscopy. (The front-illuminated, open-electrode structure offers a useful general-purpose system for many.) Users must carefully analyze their experimental needs in terms of spectral regions of interest and light levels to accurately weigh the trade-offs involved in choosing a detector type, structure and cooling method.

In closing, a word of caution: A detector is of no use by itself. Again, the optical design is of equal or greater importance in an experiment. A review of the existing systems shows that 90 percent of the possible light to be collected often is lost before it reaches the entrance slit of the spectrometer. If you can’t efficiently transmit the light to the detector, you’ve wasted your money.

Meet the author

John R. Gilchrist is director of the Optical Spectroscopy Div. of Jobin Yvon Inc. in Edison, N.J. He holds a BS and a PhD in applied physics from Strathclyde University in Glasgow, UK.
