Bioinspired Sensor Uses Metalenses for Depth from Defocus

CAMBRIDGE, Mass., Nov. 4, 2019 — Researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a compact sensor that can measure depth in a single shot. The sensor’s design was inspired by the specialized optics of the jumping spider, which has extraordinary depth perception. Each of the spider’s principal eyes contains several semitransparent retinas arranged in layers. These retinas capture images of prey with different amounts of blur, from which the spider ascertains the distance to its prey. In computer vision, this type of distance calculation is known as depth from defocus.
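In software terms, the idea can be sketched with one common formulation of depth from defocus: to first order, the brightness difference between two differently defocused images of the same scene, normalized by the image Laplacian, is proportional to depth. The snippet below is an illustrative sketch of that relation, not the algorithm from the paper; all function and variable names are the author's own.

```python
import numpy as np

def laplacian(img):
    """Discrete 5-point Laplacian with replicated edges."""
    p = np.pad(img, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] +
            p[1:-1, :-2] + p[1:-1, 2:] - 4 * img)

def depth_from_defocus(i_a, i_b, eps=1e-6):
    """Per-pixel depth proxy from two differently defocused images.

    Uses the first-order differential-defocus relation: the brightness
    difference between the two images, normalized by the Laplacian of
    their mean, is proportional to object depth. The optical constants
    of proportionality (focal length, aperture, sensor offsets) are
    omitted, so the output is in arbitrary units.
    """
    mean = 0.5 * (i_a + i_b)
    lap = laplacian(mean)
    # Mask out textureless regions, where the Laplacian is ~0 and the
    # ratio is undefined.
    valid = np.abs(lap) > eps
    depth = np.zeros_like(mean)
    depth[valid] = (i_a - i_b)[valid] / lap[valid]
    return depth, valid
```

Because the computation is a difference, a Laplacian, and a division, it needs only a handful of arithmetic operations per pixel, which is what makes single-shot variants of this idea attractive for low-power hardware.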

Until now, replicating this ability has required large cameras with motorized internal components that capture differently focused images over time. This requirement has limited the speed and practical applications of sensors based on depth from defocus.

The SEAS researchers combined multifunctional metalenses, a class of nanophotonic component, with efficient computations to create a sensor that can efficiently measure depth from image defocus. The Capasso group had previously demonstrated metalenses that can simultaneously produce several images, each containing different information. Based on that research, the group designed a metalens that can simultaneously produce two images with different blur. “Instead of using layered retinas to capture multiple simultaneous images, as jumping spiders do, the metalens splits the light and forms two differently defocused images side-by-side on a photosensor,” researcher Zhujun Shi said.
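As a software illustration of what the metalens does optically (not a model of the actual optics), the side-by-side capture can be emulated by blurring one scene with two different kernel sizes and tiling the results on a single simulated frame. The `box_blur` kernel here is a crude stand-in for the two point-spread functions, and all names are the author's own.

```python
import numpy as np

def box_blur(img, k):
    """k-by-k box blur with replicated edges (k odd)."""
    r = k // 2
    p = np.pad(img, r, mode="edge")
    out = np.zeros_like(img)
    # Average the k*k shifted copies of the padded image.
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def split_capture(scene, k_left=3, k_right=9):
    """Emulate the metalens: two differently defocused copies of the
    same scene, placed side by side on one sensor frame."""
    return np.hstack([box_blur(scene, k_left), box_blur(scene, k_right)])
```

The two halves of the returned frame play the role of the two raw images in the figure below: same scene, same instant, different defocus.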

Compact depth sensor using metalens and computational algorithms, Harvard University/SEAS.

The metalens depth sensor works in real time to capture the depth of translucent candle flames. The two images on the left are the raw images captured on the camera sensor. They are formed by the metalens and are blurred slightly differently. From these two images, the researchers compute the depth of the objects in real time. The image on the right shows the computed depth map. Courtesy of Qi Guo and Zhujun Shi/Harvard University.

The researchers coupled the metalens with off-the-shelf components to build a prototype sensor. The sensor’s current size is 4 × 4 × 10 cm, but since the metalens is only 3 mm in diameter, the overall size of the assembled sensor could be reduced with a purpose-built photosensor and housing. Because the metalens is designed for monochromatic operation at 532 nm, the researchers paired it with a 10-nm bandpass filter. A rectangular aperture was placed in front of the metalens to limit the field of view and prevent the two images from overlapping.

An algorithm developed by professor Todd Zickler’s group efficiently interprets the two images and builds a depth map to represent object distance.

To analyze the depth accuracy, the researchers measured the depths of test objects at a series of known distances and compared them with the true object distances. The 3-mm-diameter metalens measured depth over a 10-cm distance range, using fewer than 700 floating point operations per output pixel.
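An evaluation of this kind can be mirrored in a few lines: given the depths a sensor reports at a set of known target distances, compute per-sample absolute errors and an RMS summary. The readings below are hypothetical values chosen for illustration, not data from the paper.

```python
import numpy as np

def depth_accuracy(measured_cm, true_cm):
    """RMS error and per-sample absolute errors between measured and
    true object distances, both given in centimeters."""
    measured = np.asarray(measured_cm, dtype=float)
    true = np.asarray(true_cm, dtype=float)
    err = measured - true
    rms = float(np.sqrt(np.mean(err ** 2)))
    return rms, np.abs(err)

# Hypothetical readings across a ~10 cm working range:
rms, abs_err = depth_accuracy([10.4, 12.1, 14.9, 18.2], [10, 12, 15, 18])
```

Reporting a single RMS figure over the working range is a common way to summarize such a sweep alongside the per-pixel computational cost.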

An illustration of a metalens designed for compact depth sensing. Harvard University/SEAS.
An illustration of a metalens designed for compact depth sensing. It consists of subwavelength-spaced square nanopillars. By alternating two different nanopillar patterns, visualized here in red and blue, this metalens forms two images at the same time. The two images mimic the images captured by the layered retinas in the eyes of jumping spiders. Courtesy of Qi Guo and Zhujun Shi/Harvard University.

The bioinspired design is lightweight and requires a small amount of computation compared with previous passive artificial depth sensors. The sensor’s small volume, weight, and computational requirements bring depth-sensing capabilities closer to being feasible on insect-scale platforms, such as microrobots, ingestible devices, far-flung sensor networks, and small wearable devices.

Researcher Qi Guo described the ability to design metasurfaces and computational algorithms together as “a new way of creating computational sensors.” The integration of nanophotonics and efficient computation could constitute a paradigm for design in computational sensing.

“Metalenses are a game-changing technology because of their ability to implement existing and new optical functions much more efficiently, faster, and with much less bulk and complexity than existing lenses,” professor Federico Capasso said. “Fusing breakthroughs in optical design and computational imaging has led us to this new depth camera that will open up a broad range of opportunities in science and technology.”

The research was published in the Proceedings of the National Academy of Sciences.

Inspired by jumping spiders, Harvard University researchers developed a compact and efficient depth sensor that could be used on microrobots, in small wearable devices, or in lightweight virtual and augmented reality headsets. The device combines a multifunctional, flat metalens with an ultra-efficient algorithm to measure depth in a single shot. Courtesy of Qi Guo and Zhujun Shi/Harvard University.

