Tactical Airborne Reconnaissance Goes Dual-Band and Beyond

Multispectral imaging technologies are satisfying the need for a "persistent" look at the battlefield.

André G. Lareau

The military action following the events of Sept. 11 has put intelligence, surveillance and reconnaissance in the spotlight. It has highlighted the need to persistently monitor a battlefield to determine exactly who and what are there. Terrorists have become more successful in developing not only unique methods of destruction, but also more sophisticated means of eluding detection.

Tactical reconnaissance and surveillance cameras must be able to find enemy targets by day or by night, whether they are moving, fixed or camouflaged. Intelligence officers have discovered that infrared imagery offers additional useful information when it is correlated with visible imagery.


Figure 1.
An F/A-18 E/F equipped with a dual-band tactical reconnaissance camera flies over the Pentagon in an October 2001 demonstration. Courtesy of the US Navy. Image by Randy Hepp.


For example, infrared imaging can be used to expose the fuel status of an aircraft on the runway. A daytime, visible-spectrum image of the same aircraft would offer information about external details, such as the plane’s markings and paint scheme. A dual-band, common-aperture camera, however, enables the precision registration of the two images through a process called fusion, which frequently yields more information than is possible by evaluating the images separately (Figures 1 and 2).
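The article does not describe the fusion processing itself, but the basic idea can be sketched: given two frames that are already co-registered on the same pixel grid (which the common aperture provides), a simple weighted blend combines them into one product. The NumPy-based sketch and the equal weighting below are illustrative assumptions only, not the cameras' actual algorithm.

```python
# Minimal sketch of pixel-level fusion of co-registered visible and IR frames.
# Assumes both frames are already registered to the same pixel grid, as a
# common-aperture camera provides; the equal-weight blend is illustrative only.
import numpy as np

def fuse_dual_band(visible: np.ndarray, infrared: np.ndarray, ir_weight: float = 0.5) -> np.ndarray:
    """Blend a visible and an IR frame of identical shape into one fused image."""
    if visible.shape != infrared.shape:
        raise ValueError("frames must be co-registered to the same pixel grid")
    # Normalize each band to 0..1 so neither dominates purely by dynamic range.
    vis = (visible - visible.min()) / (np.ptp(visible) + 1e-9)
    ir = (infrared - infrared.min()) / (np.ptp(infrared) + 1e-9)
    return (1.0 - ir_weight) * vis + ir_weight * ir
```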


Figure 2.
Dual-band, common-aperture imagery detects targets by day or night. The images were taken simultaneously with Recon/Optical Inc.’s CA-270, mounted in a P-3 aircraft, from an altitude of 10,000 feet during a flight test with the Naval Research Laboratory in December 2001. The camera’s visible channel (left) uses a 25-megapixel CCD detector, and the IR channel (right) uses a 3- to 5-μm band, four-megapixel focal plane array.


Image analysts are demanding this added dimension of content to confirm a suspected target. The technology in these dual-band cameras pushes the limits of focal plane array development, semiconductor fabrication, airborne image processing, stabilization accuracy and optical system performance.

Visible-spectrum silicon CCD arrays of 25 megapixels and larger have been in service for several years. The arrays collect large areas of reconnaissance imagery in a single frame. After they were deployed in Bosnia during the mid-1990s, battlefield commanders sought the rapid imaging of even larger areas. In 1996, this led to the digital step-frame camera, in which a Pechan prism and scan mirror in front of the camera body allow imagery to be collected from various depression angles as a plane flies over the battlefield.

To be comparable with synthetic aperture radar, the arrays must offer coverage rates of 10,000 square nautical miles per hour. The addition of a precision stepping mechanism in front of the traditional framing camera enables the imaging of much larger areas, depending on the aircraft’s altitude and speed.

The digital step-frame camera captures a mosaic pattern of images, which are then electronically stitched together. In this manner, high-speed reconnaissance aircraft can take multiple across-line-of-flight images, resulting in collection capabilities of up to 10,000 square nautical miles per hour from altitudes of about 25,000 feet and at typical velocities of 480 knots.
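As a back-of-the-envelope check, the coverage rate is simply the across-track swath multiplied by the along-track distance flown in an hour. The sketch below solves for the swath width implied by the quoted figures; that swath width is an inference, not a number given in the article.

```python
# Coverage-rate arithmetic for a step-frame camera. The 10,000 sq nmi/hr target
# and 480-knot ground speed come from the article; the across-track swath width
# is solved for here and is not a published figure.

def coverage_rate_sq_nmi_per_hr(swath_nmi: float, ground_speed_knots: float) -> float:
    """Area collected per hour: swath width times distance flown in one hour."""
    return swath_nmi * ground_speed_knots

def required_swath_nmi(target_rate: float, ground_speed_knots: float) -> float:
    """Across-track swath needed to reach a target coverage rate at a given speed."""
    return target_rate / ground_speed_knots

if __name__ == "__main__":
    print(required_swath_nmi(10_000, 480))  # ~20.8 nmi of across-track coverage
```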

Massive IR arrays

To create a world in which there is no place for an enemy to hide, one must be able to find targets at night. Thus, massive IR arrays that can capture nighttime images are needed (Figure 3).


Figure 3.
A nighttime IR image exposes the interior of an aircraft’s wings, enabling an intelligence officer to determine whether it is fueled for launch.


Eastman Kodak Co. fabricated the first of this type of array for Recon/Optical Inc. from 1996 to 1998. The 1968 x 1968-element PtSi array was wafer-scale (60 mm on a side), so that each 4-in. silicon wafer yielded one focal plane array. These arrays, which featured quantum efficiencies of approximately 7 percent and NEDTs of 0.1 °C, have been integrated into a CA-265 IR framing camera and a CA-270 dual-band camera for flight tests sponsored by the US Naval Research Laboratory in Washington.

It was realized, however, that a higher quantum efficiency was required to meet the demanding specifications for airborne tactical reconnaissance. Using an indium bump-bonding process on a silicon CMOS readout integrated circuit, Cincinnati Electronics (now CMC Electronics Cincinnati Inc.) fabricated a wafer-scale, high-quantum-efficiency array that is being integrated into the latest generation of dual-band cameras (Figure 4).


Figure 4.
Using indium bump bonds on a silicon CMOS readout integrated circuit (left) boosts the quantum efficiencies of InSb IR focal plane arrays. Courtesy of CMC Electronics Cincinnati Inc.



The final element necessary for dual-band operation is a lens system that permits coincident collection of the visible and IR images. One such design is a catadioptric lens system in which both optical channels are active at once, yielding truly simultaneous visible and IR imaging (Figure 5).


Figure 5.
A lens system that enables the simultaneous collection of visible and IR images is necessary for high-performance dual-band operation. In this camera, the reimaging optics in the visible channel can be varied from 1:1 to 2:1, yielding a total effective focal length of 50 to 100 in. The resolution of the IR channel depends only on the 50-in.-focal-length catadioptric primary lens.


The layout includes a 50-in.-focal-length catadioptric primary lens that is reimaged into the visible and IR channels through a calcium fluoride (CaF2) beam divider/prism. The catadioptric primary lens has an elliptic first surface and a parabolic secondary mirror. The 1:1 reimaging optics in the IR channel produce an image resolution that depends solely on the primary lens. The reimaging optics in the visible channel can be varied from 1:1 to 2:1 to yield a total effective focal length of 50 to 100 in. Because the pupil diameter of the camera is fixed, the aperture varies proportionally with the change in focal length (Table 1).
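The focal-length and aperture relationship can be checked with a short calculation: the visible channel’s effective focal length is the 50-in. primary focal length multiplied by the reimaging magnification, and with the pupil diameter fixed, the f-number scales in the same proportion. The pupil diameter used below is a placeholder assumption, not a published specification.

```python
# Effective focal length and relative aperture of the visible channel.
# The 50-in. primary and the 1:1 to 2:1 reimaging range are from the article;
# the 12-in. pupil diameter in the example is hypothetical.

def visible_channel(reimaging_ratio: float, pupil_diameter_in: float,
                    primary_efl_in: float = 50.0) -> tuple[float, float]:
    """Return (effective focal length in inches, f-number) for a reimaging magnification."""
    if not 1.0 <= reimaging_ratio <= 2.0:
        raise ValueError("reimaging optics cover 1:1 to 2:1 only")
    efl = primary_efl_in * reimaging_ratio   # 50 to 100 in.
    f_number = efl / pupil_diameter_in       # fixed pupil, so f/# scales with EFL
    return efl, f_number

print(visible_channel(1.0, 12.0))  # (50.0, ~4.2) at 1:1
print(visible_channel(2.0, 12.0))  # (100.0, ~8.3) at 2:1
```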

For pointing, the camera moves the primary mirror by up to ±8° in azimuth. The camera body rolls to provide depression-angle coverage from horizon to horizon through nadir. The camera can be cued and pointed either by preplanned/preprogrammed mission data or by operator intervention. Once cued, the time required to reach any point in the field from any other point is less than 2 seconds. The camera pointing accuracy is within ±0.2° in depression and ±0.2° in azimuth, and a solid-state stabilization system provides stabilization in the roll and azimuth axes so that aircraft motion does not degrade performance.

Future technologies

Further technology is under development to overcome camouflage, concealment and deception. Instead of looking for targets in two spectra, airborne imaging spectrometers will image in dozens, perhaps hundreds (Figure 6).


Figure 6. Spectrometer-based imaging systems represent the next step in tactical surveillance. Such systems will offer spectral as well as spatial information about a potential target.

These “hyperspectral” cameras have been around for years, but their lack of robustness and their limited capability made them impractical for tactical reconnaissance. This, however, is changing as higher-speed electronics, precision stabilization and pointing, sophisticated diffraction optics and larger focal plane arrays are becoming more available and affordable.

In these tactical imaging systems, a slit is placed at the focal plane of the primary lens. As a mirror scans an image across the slit, a diffraction grating within the spectrometer spreads the spectrum of each line of the image and projects it onto a focal plane array, which can be sensitive to a broad range of spectral energy from the visible to the far-IR. For each “line” coming through the slit, a frame of data is recorded that represents the spatial and spectral dimensions of the image. As the data is recorded, an “image cube” is created that represents the scene with a complete spectral signature for each spatial component.
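Conceptually, building the cube is just bookkeeping: each frame holds one spatial line by all spectral bands, and stacking the frames recorded as the mirror steps across the scene adds the second spatial dimension. The sketch below shows that assembly with arbitrary, illustrative dimensions; it does not model any particular instrument.

```python
# Minimal sketch of assembling a hyperspectral "image cube" from slit-scanned frames.
# Each frame: (pixels_along_slit, spectral_bands). Stacking the frames recorded as
# the mirror scans yields (scan_lines, pixels_along_slit, spectral_bands).
# All dimensions here are illustrative.
import numpy as np

def build_image_cube(frames: list[np.ndarray]) -> np.ndarray:
    """Stack per-line frames into a cube indexed by (scan line, slit pixel, band)."""
    return np.stack(frames, axis=0)

rng = np.random.default_rng(0)
scan = [rng.random((640, 128)) for _ in range(480)]  # 480 mirror steps, 640-pixel slit, 128 bands
cube = build_image_cube(scan)
print(cube.shape)             # (480, 640, 128)
spectrum = cube[100, 320, :]  # complete spectral signature of one ground pixel
```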

Still newer technologies are on the horizon. Advanced CMOS image-processing chips are being manufactured that will enable teraFLOP-speed processing of dual-band imagery in real time, aboard the aircraft. This will provide onboard fusion of dual-band imagery, as well as enhanced features such as moving-target indication and real-time cueing against reference target data. Precision geolocation of cued targets, determined from each fused image, will allow a global positioning system tag to be embedded with every target, enabling true “sensor-to-shooter” operations.

It will no longer be necessary for data to be passed to a ground station to be turned into targeting information. The process will occur at the camera, enabling the immediate detection and targeting of an enemy. Networks of cameras and weapons platforms will fan out across wide surveillance areas.

Through a combination of onboard and offboard systems, such networks will offer a “persistent” look at the battlefield, satisfying the need for information by day or night.

Meet the author

André G. Lareau is senior vice president of advanced technology and programs at Recon/Optical Inc. in Barrington, Ill. He earned a BS in electrical engineering at the University of Illinois in Urbana and a master’s degree in engineering management from Northwestern University. He has been with Recon/Optical for more than 20 years and holds seven US patents relating to the application of electro-optical imaging to tactical reconnaissance.

Published: July 2002