
Computational Metaoptics Enable Broadband Imaging Applications

Metasurface-based flat optics transform the phase, amplitude, and polarization of incident light in ways that can exceed traditional refractive and diffractive optics. Post-processing software corrects aberrations and mimics the function of fast refractive lenses.

Shane Colburn, Alan Zhan, Arka Majumdar, and John Fijol, Tunoptix

Although camera technology has advanced considerably in the last few decades, there are constant demands for higher resolution, wider field of view, and full-color operation for many consumer, medical, industrial, and military applications. At the same time, these systems require lower weight, smaller size, and reduced cost while conforming to tight manufacturing tolerances. This has made state-of-the-art cameras into incredibly complicated systems, intricately optimized to balance complexity and performance. In many cases, these demands cannot all be simultaneously satisfied, as existing refractive lenses are often bulky, expensive, and subject to manufacturing constraints that limit performance.

A promising route for reducing the complexity and size of these systems is to use metasurface-based optics, or metaoptics. Metaoptics are ultrathin, flat components comprising arrays of subwavelength-spaced optical nanoantennas¹. These nanoantennas enable transformations of the phase, amplitude, and polarization of incident light in ways beyond the capabilities of traditional refractive and diffractive optics. Because of their subwavelength spacing and individually tailorable nanoantennas, metaoptics exhibit an extremely large number of degrees of freedom, facilitating miniaturized systems and advanced functionalities. In recent years, numerous publications and demonstrations have documented the use of metaoptic lenses, beam deflectors, holograms, beam-shaping optics, and various other systems.

One of the major advantages of metaoptics is that they are manufacturable using high-volume semiconductor processes, similar to how electronic chips are produced. Rather than diamond turning curved surfaces or forming specialized molds for plastic lenses, metaoptics are manufactured at the wafer level, offering economies of scale at thousands of devices per wafer. Advancements in semiconductor processing, combined with the capabilities of metaoptics, are driving extreme miniaturization of optical components (Figure 1).

Figure 1. A diced metaoptic, fabricated using semiconductor processing, next to a grain of rice. Metaoptics have the potential to revolutionize imaging systems by making miniaturized cameras possible while maintaining a high level of performance. Courtesy of Luocheng Huang.

 

Metaoptics have the potential to significantly advance the modern camera, yet outstanding challenges remain that limit their practicality. One of these key challenges is that a metaoptic’s operation can strongly depend on wavelength, producing significant chromatic aberration.

These chromatic aberrations represent a major barrier to commercial adoption of the technology. One design perspective that promises to solve the problem treats the imaging optic as a scene-to-sensor information converter. State-of-the-art results have been generated by leveraging metaoptics within a computational imaging system, enabling a single metaoptic to produce high-quality color images that approach the performance of a six-element refractive compound lens. This modified design approach can accommodate a range of camera systems for a number of potential applications.

Cause of chromatic aberrations

Because metaoptics are diffractive elements, their operation can depend strongly on the incident wavelength. Physically, this dependence arises because the nanoantennas’ configuration in a metasurface lens (metalens) depends on the design wavelength, and deviations from this wavelength induce wavefront errors that cause the focal plane to shift². This chromatic focal shift manifests as color blur and rainbow-like artifacts in images under broadband illumination. These aberrations represent a significant barrier to adoption for major consumer, industrial, medical, and military applications that require high-quality color images.
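The scale of this focal shift follows from basic diffractive optics: to first order, a diffractive lens designed for wavelength λ0 with focal length f0 focuses wavelength λ at approximately f0 × λ0/λ. A minimal sketch of this relationship, with purely illustrative values:

```python
# Illustrative only: an ideal diffractive lens focuses wavelength lam at
# approximately f0 * lambda0 / lam, where lambda0 is the design wavelength.
lambda0 = 550e-9   # hypothetical design wavelength (green), meters
f0 = 1.0e-3        # hypothetical focal length at the design wavelength, meters

for lam in (450e-9, 550e-9, 650e-9):   # blue, green, red
    f = f0 * lambda0 / lam
    print(f"{lam * 1e9:.0f} nm -> f = {f * 1e3:.3f} mm "
          f"(shift {(f - f0) * 1e6:+.0f} um)")
```

Even for this modest 1-mm focal length, the blue and red focal planes land hundreds of microns away from the design focus, which is why the blur is so severe under broadband light.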

Metalenses that simultaneously focus laser illumination at red, green, and blue wavelengths exist, but such an approach is only suitable for situations where the incident spectrum is narrowband and fixed, as in projection or VR systems. For intermediate wavelengths, the blurring is significant, precluding use with broadband light. Another method, called dispersion engineering, exploits nanoantennas that are engineered to simultaneously focus light and cancel the inherent chromatic aberration of the metalens. This method enables broadband focusing of light at the same distance, but it is unable to simultaneously accommodate large apertures, fast speeds, or extended wavelength ranges. In practice, dispersion engineering typically constrains broadband metalenses to diameters of tens of microns if the full visible range is accommodated. A metalens with an aperture of only tens of microns, however, cannot meet the demands of most applications. Mobile consumer devices, for example, typically have lenses with apertures in the millimeter range to achieve the required performance.

Rethinking imaging lens design

Inherent to these design methods for metalenses, as well as for traditional optics, is an underlying assumption that the best way to increase performance is to focus light to as tight a spot as possible across the desired wavelength range and field of view. The end goal is conceptually attractive: Focus light to a spot, and the smaller the spot, the higher the achievable resolution.

A more general perspective of the imaging process considers a lens to be a component that transfers information from an object to an image sensor³, where the information in a scene corresponds to all the individual points on objects for each incidence angle and wavelength. How exactly a lens transfers the information depends on the lens’s point spread function (PSF), which describes how a point in a scene maps onto a sensor, fully characterizing its intensity and shape when accounting for diffraction and aberrations. In this manner, the lens acts as an information converter from object to image space, and the PSF describes the mathematical basis in which image information is expressed. Traditional lenses and metalenses try to realize a point-like PSF that is as compact as possible, with the goal of producing a replica of a scene directly on a sensor, such that each object point is converted to an equivalent image point.
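This information-converter view has a compact mathematical form: for a shift-invariant system, the raw sensor image is simply the scene convolved with the PSF, plus noise. A minimal sketch in Python with NumPy and SciPy, using a hypothetical Gaussian PSF as a stand-in for a real measured or simulated metaoptic PSF:

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

def gaussian_psf(size=15, sigma=2.0):
    # Hypothetical stand-in PSF; a real metaoptic PSF is measured or simulated.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()  # normalize so the blur conserves energy

scene = rng.random((256, 256))                 # stand-in for scene radiance
psf = gaussian_psf()
sensor = fftconvolve(scene, psf, mode="same")  # scene expressed in the PSF basis
sensor += rng.normal(0.0, 0.01, sensor.shape)  # additive sensor noise
```

The choice of PSF is the designer's choice of basis: a point-like PSF reproduces the scene directly, while a broader PSF spreads each scene point over many pixels without necessarily destroying the information.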

In practice, such a basis of point-like PSFs is difficult to realize, entailing the use of multiple elements to correct aberrations, or dispersion-engineered metalens nanoantennas that have limited use cases. Furthermore, this basis is often wholly divorced from the fact that, with many modern cameras, raw sensor data is frequently processed by software. Unless no post-processing is tolerable at all or the captured data is extremely noisy, a lens design need not conform to the traditional basis of representing object points by equivalent image points. There are other information bases that a designer can select. A PSF can take various forms, sometimes entailing simpler optics (e.g., fewer elements or shorter track length) than those required for a point-like basis (Figure 2).

 Figure 2. A traditional lens design (left) tries to realize a compact focal spot, leveraging complex configurations of aberration-correcting lenses to ensure that the spot size is minimized over the desired field of view and wavelength range. This approach ignores the regular presence of post-processing software and can create overengineered lenses. An alternate approach (right) leverages the degrees of freedom of a metaoptic to transform the light so that the sensor image appears blurry but still contains the necessary information for high-quality image reconstruction, while cutting system size and complexity. Courtesy of Tunoptix.

 

While a raw image directly captured on the sensor using such an alternative PSF may appear blurry to the human eye, the original scene content is restorable using image processing along with a priori knowledge of the PSF. In such a system, sensor noise can limit reconstruction quality, but pixel architectures have advanced considerably over the years, mitigating this noise. And when coupled with state-of-the-art nonlinear denoising algorithms, high-resolution image reconstructions are becoming increasingly feasible even in lower signal-to-noise ratio conditions. With these advancements in sensor architecture and denoising, and the already routine post-processing of raw captures, the emphasis on using a point-like image basis leads to overengineered lenses, where extra elements are added, track lengths are increased, tolerances are tightened, and costs rise.
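A classic way to perform such a restoration with a known PSF in the presence of noise is Wiener deconvolution, which regularizes the inverse filter with an estimated noise-to-signal ratio. A minimal frequency-domain sketch, reusing scene, psf, and sensor from the example above (the reconstruction pipelines cited in this article use far more sophisticated, nonlinear algorithms):

```python
import numpy as np

def wiener_deconvolve(sensor, psf, nsr=1e-2):
    # Embed the small PSF in a full-size array, centered on the origin
    # so the FFT-based deconvolution introduces no spatial shift.
    pad = np.zeros_like(sensor)
    h, w = psf.shape
    pad[:h, :w] = psf
    pad = np.roll(pad, (-(h // 2), -(w // 2)), axis=(0, 1))
    H = np.fft.fft2(pad)                      # transfer function of the blur
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener filter; nsr keeps it finite
    return np.real(np.fft.ifft2(np.fft.fft2(sensor) * W))

restored = wiener_deconvolve(sensor, psf)     # back toward a point-like basis
```

The nsr parameter encodes the noise tradeoff discussed above: lower sensor noise permits a smaller regularization term and therefore a sharper reconstruction.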

Realizing broadband imaging

This alternate perspective on imaging has provided a pathway to realizing broadband imaging using metaoptics. Researchers have previously shown that by modifying a metalens so that, in addition to focusing, it also extends the depth of focus to compensate for chromatic focal shift, the metaoptic’s PSF becomes nearly identical as a function of wavelength across the visible spectrum⁴. The resultant PSF comprises an L-shaped pattern with multiple lobes spread across a neighborhood of pixels, blurring the sensor image. The resulting raw frame, however, is uniformly blurry for all visible wavelengths, expressed in the basis of the metaoptic’s PSF. By combining an extended depth of focus (EDOF) metaoptic with post-processing software, the sensor image is deconvolved back into a point-like basis, providing a focused, full-color image under full visible-spectrum illumination.

While a standard metalens produces extremely blurry frames with substantial rainbow artifacts, the color content in a metaoptic EDOF system is far better focused (Figure 3). Such a system unfortunately exhibits artifacts and visible noise that limit image quality. While the EDOF metaoptic’s PSF is nearly identical for all visible wavelengths, this PSF is still not the only possible basis choice. A recent work showed that by jointly optimizing the metaoptic’s structure and the post-processing software’s parameters, the final image quality is significantly improved⁵. This approach incorporated a model of the light’s interaction with the metasurface, the sensor capture, and the effect of the post-processing algorithm, enabling optimal matching of the metaoptic’s structure with the post-processing settings. Experimental reconstructions with this single-metaoptic system achieved image quality near that of a six-element refractive lens while providing a 555,000× volume reduction (Figure 3). To date, this demonstration represents the largest combination of aperture size and field of view for a metaoptic supporting full visible-spectrum operation at f/2.
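The spirit of this joint optimization can be conveyed with a toy stand-in: sweep one "optic" parameter and one "software" parameter together, scoring each pair on the final reconstructed image. The cited work instead optimizes millions of nanoantenna parameters and a learned reconstruction network by gradient descent through a differentiable model; the sketch below, reusing scene, gaussian_psf, and wiener_deconvolve from the earlier examples, shows only the end-to-end principle:

```python
import numpy as np
from itertools import product
from scipy.signal import fftconvolve

# Jointly search over a hypothetical optic parameter (PSF width sigma) and a
# software parameter (Wiener NSR); the loss is measured on the FINAL image,
# after reconstruction, rather than on focal-spot size at the sensor.
best = None
for sigma, nsr in product((1.0, 2.0, 3.0), (1e-3, 1e-2, 1e-1)):
    cand_psf = gaussian_psf(sigma=sigma)                      # candidate optic
    capture = fftconvolve(scene, cand_psf, mode="same")       # simulated sensor
    capture += np.random.default_rng(0).normal(0, 0.01, capture.shape)
    restored = wiener_deconvolve(capture, cand_psf, nsr=nsr)  # candidate software
    err = np.mean((restored - scene) ** 2)                    # end-to-end loss
    if best is None or err < best[0]:
        best = (err, sigma, nsr)

print(f"best (sigma, nsr) = ({best[1]}, {best[2]}), MSE = {best[0]:.2e}")
```

The key design choice is that the optic and the software are scored as one system, so the search can trade a worse-looking raw capture for a better final image.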


Figure 3. Experimental captures using a conventional metalens exhibit severe color blur (a). Images produced by an extended depth of focus (EDOF) metaoptic after reconstruction are far sharper but exhibit artifacts and noise (b). Images from a broadband computational metaoptic, where the metaoptic and post-processing software are optimally matched, are of far higher quality (c). They begin to approach the image quality achieved by a six-element refractive, while using only a single metaoptic and providing a 555,000× reduction in volume (d). Courtesy of Ethan Tseng.

 

Leveraging metaoptics effectively

Imaging in this manner is not possible with a standard metalens. A metalens’s PSF varies considerably with wavelength, producing very large, defocused PSFs at off-design wavelengths, where object points become blurred beyond recovery. Resolution information not captured by a lens at a given wavelength cannot be reconstructed. Naively deconvolving a standard metalens image captured under broadband illumination is akin to dividing by zero, introducing artifacts and noise that do not represent physical scene content. If a multiwavelength metalens were used that had at least one wavelength in focus per color channel of an RGB sensor, information could be recovered only at those particular wavelengths, and fine object features at off-design wavelengths would remain unreconstructable.
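The dividing-by-zero analogy is literal in the frequency domain: a naive inverse filter divides the captured image's spectrum by the blur's transfer function H, and wherever |H| is near zero (spatial frequencies the lens never collected), the division amplifies noise without bound. A two-line contrast with the Wiener sketch above, reusing its psf and sensor:

```python
# Naive inverse filter: wherever |H| is tiny, dividing by H amplifies noise
# arbitrarily instead of recovering scene content the lens never captured.
# The NSR term in the Wiener filter is exactly what keeps this division finite.
H = np.fft.fft2(psf, s=sensor.shape)                     # transfer function
naive = np.real(np.fft.ifft2(np.fft.fft2(sensor) / H))   # unstable where |H| ~ 0
```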

Deconvolution is applicable to refractive systems as well, though the same limitation applies: only information actually collected by the lens can be reconstructed. Metaoptics combined with computation, however, present a compelling platform with which to enhance refractive lenses. One key advantage is that metaoptics offer a high number of degrees of freedom relative to refractives, which are subject to typical manufacturing constraints. While many mass-produced refractives must contend with stringent limitations on surface curvature and are typically rotationally symmetric, metaoptics do not have these restrictions. A metaoptic instead can impart a subwavelength-resolution profile that mimics the functionality of a fast refractive, all while remaining flat and entailing the same manufacturing complexity as a simple metalens. Such design freedom opens up a much wider range of realizable PSFs than symmetric refractives alone can offer, and by combining the two into hybrid refractive/metaoptic systems with jointly optimized reconstruction software, even greater performance enhancements may be possible.

Tunoptix, a Seattle startup spun out of and collaborating with the University of Washington, is commercializing this broadband computational metaoptics technology for high-performance camera systems. The company’s approach is promising for a range of applications with widely varied operating specifications, offering a design approach that is relevant across many length scales in an array of consumer, medical, industrial, and space applications.

At one camera-size extreme is the miniature endoscope (Figure 4). For many endoscopes, reducing volume to minimize the invasiveness of a medical procedure is desirable. In many cases, it is also necessary to image over an extended field of view while maintaining sufficient resolution to inspect lesions or guide a surgeon’s movements. Existing endoscopes often leverage a series of refractive microlenses assembled into aberration-correcting systems that only support relatively high f-stops, increasing exposure times and limiting resolution. Broadband computational metaoptics instead offer a path to fast lenses with a single flat component, enabling higher-resolution lens profiles and more degrees of freedom compared to microlenses.

Figure 4. Miniaturized endoscopes could significantly improve health outcomes by improving diagnoses and assisting surgeons during complicated procedures. Broadband computational metaoptics will help enhance the field of view and resolution of endoscopes while reducing overall volume, to minimize the invasiveness of a procedure. Courtesy of iStock.com/romaset.

 

At the opposite extreme are very large optics for space applications, where color information is often critical for identifying structures accurately or for detecting wildfires early using satellites. For any camera sent into space, the overall mass must be minimized to reduce the launch vehicle payload. This becomes even more critical for low-mass CubeSats and microsatellites. Cameras exploiting a single metaoptic could reduce mass and also eliminate alignment-sensitive refractive assemblies that must survive intense vibrations encountered when reaching orbit. Arrays of multiple metaoptics could then be integrated together to simultaneously capture images for frame stacking, enabling imaging of very low-surface-brightness celestial objects while only minimally increasing camera mass.
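The payoff from frame stacking is easy to quantify: averaging N independent noisy frames reduces the noise standard deviation by a factor of the square root of N. A minimal, self-contained illustration with synthetic data (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
truth = np.full((64, 64), 0.05)          # faint, low-surface-brightness source
frames = truth + rng.normal(0.0, 0.5, size=(100, 64, 64))  # 100 noisy captures

stacked = frames.mean(axis=0)            # noise drops by sqrt(100) = 10x
print(np.std(frames[0] - truth))         # ~0.5  (single frame: source invisible)
print(np.std(stacked - truth))           # ~0.05 (stacked: source detectable)
```

An array of metaoptics capturing frames in parallel achieves this averaging without lengthening the observation, at a small cost in camera mass.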

Metaoptics present a promising platform for enhancing camera performance while reducing complexity and size. Unfortunately, standard metalenses typically exhibit severe chromatic aberrations that preclude their use in imaging applications with broadband light. Researchers have attempted to solve this issue through innovations in the design of the metaoptic itself, but these techniques cannot scale to lens sizes beyond a few tens of microns while maintaining resolution. Furthermore, the underlying emphasis in camera design on focusing light to as tight a spot as possible can lead to overengineered lenses that are bulky and complex, ignoring the frequent application of post-processing to captured raw data in modern cameras.

When a metaoptic is viewed as an information converter that transfers scene content onto a sensor, design freedom is gained in that the sensor image is not restricted to being represented in the basis of compact focal spots. Instead, a designer can select a more appropriate basis for a given application, designing the lens such that its PSF is realizable by using a simpler lens, while resolution information is transformed, yet preserved, in the raw sensor data. Leveraging this approach, broadband metaoptics have been demonstrated with image quality on par with that of a six-element refractive, but with a 555,000× volume reduction. Relative to refractive lenses alone, metaoptics offer a wide range of realizable PSFs. When coupled with post-processing software, broadband computational metaoptics could significantly benefit applications at many different length scales, including submillimeter lenses for endoscopes, millimeter-scale optics for consumer applications, and large-aperture lenses for space applications.

Meet the authors

Shane Colburn, Ph.D., is director of system design at Tunoptix and an affiliate assistant professor in the Department of Electrical and Computer Engineering at the University of Washington; email: [email protected].

Alan Zhan, Ph.D., is director of optical design at Tunoptix. He received his doctorate in physics from the University of Washington; email: [email protected].

Arka Majumdar is an associate professor in the electrical and computer engineering and physics departments at the University of Washington. He is a co-founder of and technical adviser at Tunoptix; email: [email protected].

John Fijol, Ph.D., is CEO of Tunoptix. He has 25-plus years of experience developing electro-optics products at Fortune 500 companies, at early-to-late-stage startups, and through venture investing; email: [email protected].

References

1. N. Yu and F. Capasso (2014). Flat optics with designer metasurfaces. Nat Mater, Vol. 13, pp. 139-150.

2. E. Arbabi et al. (2016). Multiwavelength polarization-insensitive lenses based on dielectric metasurfaces with meta-molecules. Optica, Vol. 3, pp. 628-633.

3. D.G. Stork and M.D. Robinson (2008). Theoretical foundations for joint digital-optical analysis of electro-optical imaging systems. Appl Opt, Vol. 47, pp. B64-B75.

4. S. Colburn et al. (2018). Metasurface optics for full-color computational imaging. Sci Adv, Vol. 4, p. eaar2114.

5. E. Tseng et al. (2021). Neural nano-optics for high-quality thin lens imaging. Nat Commun, Vol. 12, p. 6493.

Published: January 2022
