William L. Wolfe, Professor Emeritus, University of Arizona, Optical Sciences Center

Infrared system design is not, like some circuit design, a synthetic process. One cannot simply state the problem and proceed in an orderly fashion to a final solution. Rather, we guess a solution and explore its applicability and capability. It is an iterative process, which goes faster if the first guess is a good one. The quality of that first guess is a function of insight and experience, and insight comes largely from experience.
Figure 1. A five-step iteration process is the logical way to design an infrared system.
The design process that I have found to be most efficient is to calculate, in order, the geometry, the dynamics, the sensitivity and, finally, the optics. Then review the results, consider alternatives and try again: iterate, iterate, iterate. The first pass produces a suboptimum spectral region, altitude, range, optics diameter, optical system, scan technique, etc. This allows calculation of the various efficiencies and their effects on the performance. If you've assumed photon-limited detectors, check the assumption: Can you obtain one that good? Calculate the optical efficiency and atmospheric transmission, and refine assumptions about the source.
A final important step is a step back. Are there other approaches — a different altitude, spectral region, detector, optical system or systems? This can be the innovative step that wins a contract or generates a patent.
The geometry may be large or small, astronomical or microscopic. It may be in the sky, in space or on the ground. But the essential considerations usually include the angular resolution and total angular field of view, the number of pixels in the field and the range to the center and edges of the field.
The angular pixel or resolution element can be calculated as a simple ratio. It is usually square and the linear angle is given simply by the ratio of the side of the pixel to the detection range. When it is square, most designers state only the one angle. The symbols used here are a and b for the linear dimensions of a detector element and α and β for the angular dimensions.
The field of view is usually much larger and must be calculated with the following equation:

Θ = 2 tan⁻¹(S/2R)

where Θ is the full field of view, S is the full length or width of the field and R is the perpendicular range from the sensor to the center of the field.
A similar equation applies to the other dimension of the field if it is not square.
A large angular field (Figure 2) complicates the calculation of the angular resolution. The range at the edge of the field is larger than the perpendicular range, so a projection is necessary. Thus, the angular pixel size is given by the two equations for the sides:

α = (x/R) cos²Θ     β = (y/R) cosΘ

where α and β are the two angular measures of the resolution element, x and y are the linear measures of the resolution on the object, and Θ is the angle from the perpendicular to the line of sight, with x lying in the plane of the tilt.
The edge of the field is, in some cases, the corner.
Figure 2. Determining angular resolution for large angular fields requires calculating equations in both directions.
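As a rough numerical check, the sketch below computes the full field angle from Θ = 2 tan⁻¹(S/2R) and the edge-of-field pixel angles. It assumes the slant range grows as R/cos Θ and that the pixel dimension lying along the tilt is foreshortened by a further factor of cos Θ; the example numbers are hypothetical.

```python
import math

def full_field_angle(S, R):
    """Full field of view (rad) for a field of full width S
    at perpendicular range R."""
    return 2.0 * math.atan(S / (2.0 * R))

def edge_pixel_angles(x, y, R, theta):
    """Angular pixel size (rad) for a pixel seen at off-axis angle theta.
    The slant range is R/cos(theta); the dimension lying along the tilt
    (x here) is foreshortened by a further factor of cos(theta)."""
    alpha = x * math.cos(theta) ** 2 / R  # along the tilt
    beta = y * math.cos(theta) / R        # perpendicular to the tilt
    return alpha, beta

# A 10-km-wide field mapped from 5 km: a 90-degree full field
Theta = full_field_angle(10e3, 5e3)
# A 1-m square pixel at the field edge, 45 degrees off axis
alpha, beta = edge_pixel_angles(1.0, 1.0, 5e3, Theta / 2.0)
```

At the 45° edge the along-tilt pixel subtends only half its nadir value of 0.2 mrad, which is why the edge (or corner) of the field drives the design.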
The dynamics include the scanning frequency or the frame time, the line time and dwell time and, therefore, the required bandwidth. (The bandwidth is required for the next step, the sensitivity calculation.) In some special applications, the field and dynamics become almost trivial; for example, an infrared ear thermometer.
For real-time imagers, the frame time is 1/30 s, the reciprocal of the US television frame rate. In some cases, it may be 1/60 or 1/50 s. For interceptor applications, closing rates or search times dictate the required frame time. For strip mappers, the velocity-to-height ratio and resolution dictate the frame time.
The dwell time is the frame time multiplied by the number of detectors and an estimated scan efficiency, and divided by the number of pixels. The bandwidth is then 1 divided by twice the dwell time:

td = mηsctf/N     B = 1/(2td)

where N is the total number of angular pixels in the field, m is the number of detector elements, ηsc is the scan efficiency and tf is the frame time.
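A minimal sketch of this bookkeeping, with hypothetical example numbers:

```python
def dwell_time(n_pixels, m_detectors, eta_sc, t_frame):
    """Dwell time: frame time multiplied by the detector count and the
    scan efficiency, divided by the total number of pixels in the field."""
    return m_detectors * eta_sc * t_frame / n_pixels

def noise_bandwidth(t_dwell):
    """Effective noise bandwidth: 1 / (2 * dwell time)."""
    return 1.0 / (2.0 * t_dwell)

# A 640 x 480 field, a single detector, 75 percent scan efficiency,
# and a 1/30 s frame time (the US television rate)
t_d = dwell_time(640 * 480, 1, 0.75, 1.0 / 30.0)
B = noise_bandwidth(t_d)  # about 6.1 MHz
```

The resulting multimegahertz bandwidth for a single detector shows why multiplexing the field over an array of detectors is so attractive.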
The next step is to calculate the sensitivity.2 For an imager, this is usually the minimum resolvable temperature difference, which is closely related to the noise-equivalent temperature difference. Radiometric evaluations of target and background are inherent in sensitivity analysis.
For detection systems, sensitivity is indicated by the signal-to-noise ratio or sometimes the resultant probability of detection and false alarm rate. The initial calculation can be idealized, perhaps assuming that all efficiency factors are 100 percent. Because at least one other iteration will occur, it is premature to be precise. Approximate, simplified calculations2 will aid in reducing the solution space.
The signal-to-noise ratio for a point source in terms of a specific detectivity is:

SNR = D*Φd/√(AdB) = D*τεAsAoLλBB/(R²√(AdB))

where D* is the specific detectivity; Φd is the power on the detector from the source; B is the effective noise bandwidth; τ is the transmittance of the atmosphere and the optics; ε is the emissivity of the source; Ad, As and Ao are the areas of the detector, source and optical aperture, respectively; R is the range; and LλBB is the blackbody spectral radiance of the source. The spectrally varying quantities are weighted average values over the band.
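The point-source relation SNR = D*Φd/√(AdB), with Φd = τεAsAoL/R², can be sketched as follows. Consistent units and band averaging are left to the caller, and the example values are hypothetical; the point of the example is the inverse-square range dependence.

```python
import math

def power_on_detector(tau, eps, a_source, a_optics, L, rng):
    """Phi_d: band-averaged radiance times emissivity, source and aperture
    areas, and path transmittance, spread over the range squared."""
    return tau * eps * a_source * a_optics * L / rng**2

def point_source_snr(d_star, phi_d, a_detector, bandwidth):
    """SNR = D* * Phi_d / sqrt(Ad * B)."""
    return d_star * phi_d / math.sqrt(a_detector * bandwidth)

# Doubling the range cuts Phi_d, and hence the SNR, by a factor of four
phi_near = power_on_detector(0.5, 0.9, 1.0, 0.05, 10.0, 1.0e5)
phi_far = power_on_detector(0.5, 0.9, 1.0, 0.05, 10.0, 2.0e5)
snr_ratio = (point_source_snr(1e10, phi_near, 1e-8, 1e4)
             / point_source_snr(1e10, phi_far, 1e-8, 1e4))
```

The ratio test is a useful sanity check before any absolute radiometry is attempted.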
If the detector is limited by photon noise and the optics by the diffraction limit, then the equation is:
where η is the detector quantum efficiency, ηcs is the cold-shielding efficiency, and Eqλ is the flux density on the detector. The subscript q represents photonic quantities.
Most extended-source infrared applications deal with noise-equivalent temperature differences. The equations that provide this information are:
where g is 4 if there is recombination of carriers, or 2 if not, and the radiance values indicated by L are assumed to be integrated over the spectral band of sensitivity. The second equation is for a photon- and diffraction-limited system.
Assuming that things have not fallen apart in the previous steps, use the approximate, third-order equations to see if reasonable optics solutions are available. Also calculate the diffraction limit. The approximate (mirror) equations for the angular blur diameters based on the diffraction limit, spherical aberration, coma and astigmatism are, respectively:3

βdiff = 2.44λ/D     βSA = 1/(128F³)     βcoma = Θ/(16F²)     βast = Θ²/(2F)

where λ is the wavelength, D is the aperture diameter, F is the f-number and Θ is the half-field angle.
The diffraction limit, as presented here, is related to the Rayleigh limit, but it really represents the diameter of the main lobe of the diffraction pattern of a circular aperture. The detector should just contain this blur.
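Taking the standard third-order spherical-mirror forms (2.44λ/D for diffraction, 1/128F³ for spherical aberration, Θ/16F² for coma and Θ²/2F for astigmatism, with F the f-number and Θ the half-field angle), the blur comparison can be scripted; the example system is hypothetical.

```python
import math

def blur_diameters(wavelength, aperture, f_number, half_field):
    """Approximate angular blur diameters (rad) for a mirror system:
    diffraction plus the third-order spherical, coma and astigmatism blurs."""
    F = f_number
    return {
        "diffraction": 2.44 * wavelength / aperture,
        "spherical": 1.0 / (128.0 * F**3),
        "coma": half_field / (16.0 * F**2),
        "astigmatism": half_field**2 / (2.0 * F),
    }

# A 10-um wavelength, 10-cm aperture, f/2 mirror, 1-degree half field
blurs = blur_diameters(10e-6, 0.10, 2.0, math.radians(1.0))
worst = max(blurs, key=blurs.get)  # spherical aberration dominates here
```

Running such a comparison for each candidate system quickly shows which aberration the detailed optical design must attack first, and whether the detector can just contain the diffraction blur.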
Detailed optical design can usually generate an optical system that performs better than the blur equations predict, but the blur calculations are a good guide. Most good design software programs have libraries of existing optical designs. Some are also available in books3,4 and design programs.5
Step back and iterate
This is the time to consider all reasonable alternatives: a different detector and platform, a new spectral region, other optics, beamsplitters, and multiplexing in field, aperture, time, space or even profession.
One important step that is often overlooked at this stage is renegotiating the specifications with the sponsor. Sometimes, when the drivers and their costs are identified, the sponsor can and will modify the requirements. Remember that the goal is to accomplish a task, not just to build specific equipment.
After obtaining a reasonable solution, analyze it in detail: Integrate the spectral signal times the spectral D*, integrate the noise spectrum, and carefully calculate the optimum spectral region.
Don’t consider the design to be complete until two final steps have been taken: Determine that you’ve met all the requirements and develop a clear procedure for making this decision.
The next stage is to assemble a team that is headed by the system designer and that includes mechanical and optical designers, detector and thermal experts, and those experienced in electronics. The system designer must give each discipline several assignments, including both a desirable goal and an absolute requirement. This apportionment of the error budget is critical. The team must negotiate and renegotiate elements of the system as opportunities and snags arise.
One interesting kind of trade-off is that the minimum resolvable temperature difference contains both the modulation transfer function and the noise-equivalent temperature difference. The latter is generally better if the spectral band is broader, but the optical system may suffer from additional chromatic aberration. This trade-off crosses disciplines and must be negotiated.
The steps, described somewhat generally here, should become more meaningful as they are applied to the following problems.
Strip mappers

The military uses these for reconnaissance, and NASA uses them for remote sensing. For a strip mapper, the key question is whether to use a push- or whisk-broom technique; the optical design determines the answer.
The swath width, resolution and required temperature sensitivity are specified along with the vehicle. The swath width and linear resolution can be used to calculate the required angular resolution. That determines the linear size of the resolution spot at the nadir, which, with the vehicle velocity, determines the line time.
The swath width and the resolution, on an angular basis, determine the number of pixels in a line and, therefore, the bandwidth. It often makes sense to calculate this on the basis of a single detector, but if the dwell time is too small, assume an array immediately.
The bandwidth can be inserted in the sensitivity equation with the properties of the scene and atmospheric and optical transmission to obtain the noise-equivalent temperature difference, noise-equivalent radiance difference, noise-equivalent emissivity difference, etc.
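The chain above (swath and resolution give the angular resolution, velocity gives the line time, and the pixel count gives the bandwidth) can be sketched for a single detector. This uses nadir values only and ignores the edge-of-swath projection of Figure 2; the example vehicle is hypothetical.

```python
def strip_mapper(swath, resolution, altitude, velocity, eta_sc=1.0, m=1):
    """Nadir-only strip-mapper bookkeeping for m detectors and scan
    efficiency eta_sc. Returns a dict of the derived quantities."""
    ifov = resolution / altitude     # angular resolution at nadir (rad)
    t_line = resolution / velocity   # vehicle advances one line per t_line
    n_line = swath / resolution      # pixels across the swath
    t_dwell = m * eta_sc * t_line / n_line
    return {"ifov": ifov, "t_line": t_line,
            "pixels_per_line": n_line, "bandwidth": 1.0 / (2.0 * t_dwell)}

# A 10-km swath at 1-m resolution from a 200-m/s aircraft at 3 km
d = strip_mapper(10e3, 1.0, 3e3, 200.0)  # bandwidth of 1 MHz
```

A 1-MHz bandwidth with one detector is a cue to check whether the resulting dwell time supports the required sensitivity and, if not, to assume an array immediately.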
Imagers

The military and the makers of the Cadillac (Figure 3) use these devices. Calculate the geometry as described; for a staring array, the frame time is the dwell time, and vision factors need to be taken into account for real-time imaging. Use the Johnson criteria to determine the resolution needed for detection or recognition. The imaging array will cover the entire field, and the optics must cover that field with proper resolution.
Intercontinental ballistic missile (ICBM) midcourse detection
The target is relatively cool, from 250 to 300 K, and typically has an emissivity-area product of about 1 m2. It has the extremely cool background of outer space, an equivalent temperature of about 60 K. Optimizing the spectral region to reduce radiation from the sun, moon and other astronomical objects while maximizing the input from the target is key.
Figure 3. The night-vision system in the Cadillac uses Raytheon Co. ambient temperature technology. Courtesy of General Motors.
Calculate geometry and dynamics as above. Then use a very sensitive detector array and reduce the incident flux density with a good baffle tube, low emissivity and cooled optics.
ICBM launch detection
Detecting the enormous radiant energy of the launch plume provides early warning of a missile launch. Much of the energy is in the water and carbon dioxide bands, which the atmosphere absorbs. Fortunately, the plume has temperature- and pressure-broadened lines, the wings of which are not absorbed.
The infrared set should probably be in a geosynchronous satellite, so range will be a factor, but imaging is not required. The background will be that of the Earth and its atmosphere, depending upon the spectral band.
ICBM re-entry detection
The target radiates copiously and closes rapidly. The interceptor must also move rapidly. The mid-IR region can be used, with moderately sensitive detectors, but the detectors must be cooled immediately after launch, the window must withstand high temperatures, and aerodynamic flow cannot interfere significantly.
Aircraft collision warning
This is similar to protecting a military aircraft against missiles. The coverage must be complete, with emphasis on nose-to-nose collision.
Figure 4. A rocket carrying a prototype missile interceptor launches from Meck Island. Infrared systems are involved in missile launch detection, midcourse detection and re-entry detection. Courtesy of the US Department of Defense.
The geometry is the forward hemisphere (large), but the resolution need not be that of the aircraft. It must be such that the target plus background minus the blocked background is greater than the variation in the background. That will determine the number and size of the pixels.
Reaction time and closing velocity will determine the bandwidth. The plume, engines and aerodynamic heating provide signal. The main issue is whether the device will be a scanner, fly-eye or fish-eye system; in any case, the signal-to-noise ratio must be high enough to provide a good probability of detection and a low false alarm rate.
Missile seekers

There are two types of seekers: reticles and imagers.
For a reticle, the field must be large enough to include the target despite missile pitch and yaw. The image of the target must fit inside the blades of the reticle to obtain sufficient modulation. The reticle frequency sets the bandwidth. The signal is usually the plume and engine radiation from the plane. The plume has the same spectral problems as in the early warning system.
The imaging version need not provide good imagery; even if several pixels are excited, centroid and related algorithms can be employed.
Weather satellites

These have been in geosynchronous satellites at an altitude of about 36 Mm. They cover fields of view that approach the entire projected Earth (about 20° and about 0.1 sr), but the resolution is modest because cloud structures are large (about 5 to 10 km and about 0.1 msr).
The dynamics are kind. An entire scan can take many minutes — say, 15 — dictated by the dynamics of the weather. This would mean about 1000 pixels in 1000 s for a bandwidth of about 1 Hz. The required sensitivity is on the order of 1 K.
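The arithmetic behind these kind dynamics is short enough to write out, using the rounded values from the text:

```python
field_sr = 0.1       # full projected Earth, about 0.1 sr
pixel_sr = 0.1e-3    # cloud-scale resolution, about 0.1 msr
frame_s = 1000.0     # a full scan in roughly 15 minutes

n_pixels = field_sr / pixel_sr     # about 1000 pixels
t_dwell = frame_s / n_pixels       # a full second on each pixel
bandwidth = 1.0 / (2.0 * t_dwell)  # 0.5 Hz, i.e., about 1 Hz
```

A sub-hertz bandwidth is why sensitivity is easy here and long-term reliability becomes the driving issue instead.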
The main issue is long-term reliability. Current devices use two HgCdTe detectors, one for redundancy, cooled by a radiative cooler. Future versions might use uncooled detector arrays and offer faster coverage, higher resolution and greater agility — perhaps local area scans in less time.
Infrared ear thermometers
The geometry and dynamics are simple. The devices look at the eardrum (tympanum) within a second or two (Figure 5).
Figure 5. An ear thermometer uses an infrared sensor to determine the temperature of the eardrum.
These usually use a cylindrical light pipe to accept tympanum radiation and a thermal detector for simplicity. The signal is relatively large and is based on body temperature and an emissivity of about one over a large field of view.
There must be a reference temperature and software for conversion of the radiant signal to temperature. Two approaches have been used: chopping the signal and using lock-in techniques, and measuring some feature of the rise time.
Intrusion detectors

The geometry is a person in the forward hemisphere, more or less. Imaging is not necessary, and resolution is determined the same way as with an aircraft warning device. The time constant need be only about 1 s, and the signal is a person. Typical devices (door openers, lighting illuminators, automobile trunk childproofers) have used pyroelectric detectors and simple, plastic Fresnel lenses to accomplish change detection.
A toilet flusher uses a similar device but detects the departure, rather than the entry, of the intruder. This is a simple device, with a small field of a few degrees and resolution the same as the field. The time constant is about 1 s, and range is less than 1 m. The signal is a person. We know the target, and the process is change detection. It’s no wonder that they are ubiquitous!
A portion of this article was taken from another of my publications, Infrared Design Examples, SPIE Press, which covers some of these examples in much more detail.
1. Hudson, R.D. (1969). Infrared System Engineering. Wiley; I.J. Spiro and M. Schlessinger (1989). Infrared Technology Fundamentals. Dekker; E.L. Dereniak and G.D. Boreman (1996). Infrared Detectors and Systems. Wiley; W.L. Wolfe and G.J. Zissis (1978). The Infrared Handbook. ERIM and SPIE; W.L. Wolfe (1997). Introduction to Infrared System Design. SPIE Press.
2. Wolfe, W.L. (1999). Infrared Design Examples. SPIE Press.
3. Fischer, R.E. and B. Tadic-Galeb (2000). Optical System Design. McGraw-Hill; W.J. Smith. Modern Optical Engineering. McGraw-Hill.
4. Jones, L. (1995). Reflective and Catadioptric Objectives. In M. Bass et al., eds., Handbook of Optics. McGraw-Hill; M.G. Turner (2003). Reflective and Catadioptric Objectives. In W.L. Wolfe, ed., Optical Engineer's Desk Reference. Optical Society of America and SPIE.
5. Code V, OSLO and ZEMAX, for instance.