
Imager Could Help Autonomous Vehicles See Around Corners

An around-the-corner camera system built by engineers at Stanford University uses a wave-based image formation model to enable non-line-of-sight (NLOS) imaging. The system builds on previous around-the-corner cameras developed by the team but captures more light from a greater variety of surfaces and sees wider and farther than earlier versions, making it more practical for real-world use. It is also fast enough to monitor out-of-sight movement.

Objects — including books, a stuffed animal, and a disco ball — in and around a bookshelf tested the system's versatility in capturing light from different surfaces in a large-scale scene. Courtesy of David Lindell.

To capture around-the-corner images, the system uses a powerful laser to scan a wall opposite the scene of interest. Light bounces off the wall, hits the objects in the scene, and bounces back to the wall and the camera sensors. By the time the laser light reaches the camera, only specks remain, but the sensor captures every speck and sends them to a highly efficient algorithm that deciphers the image.

The algorithm, also developed by the researchers, interprets bouncing light as waves emanating from hidden objects, similar to the way that seismic imaging systems bounce sound waves off underground layers of earth to learn what’s beneath the surface. This reconfigured algorithm, inspired by inverse methods used in seismology, improved the camera’s ability to image large scenes containing various materials.
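The inverse problem being solved can be illustrated with a minimal toy sketch in Python. This is a naive time-of-flight backprojection with an invented single-point hidden scene and hypothetical units, not the team's actual f-k migration algorithm, which solves the same problem far faster in the frequency domain:

```python
import numpy as np

C = 1.0    # speed of light in arbitrary toy units (hypothetical)
DT = 0.01  # temporal bin width of the photon histograms

def backproject(transients, wall_pts, voxels):
    """Naive time-of-flight backprojection for confocal NLOS imaging.

    transients: (n_wall, n_t) photon-arrival histograms, one per scanned wall point
    wall_pts:   (n_wall, 3) scanned positions on the relay wall
    voxels:     (n_vox, 3) candidate hidden-scene positions
    Returns a brightness score per candidate voxel.
    """
    n_wall, n_t = transients.shape
    scores = np.zeros(len(voxels))
    for i, p in enumerate(wall_pts):
        # round-trip path: wall point -> hidden voxel -> same wall point
        d = 2.0 * np.linalg.norm(voxels - p, axis=1)
        bins = np.round(d / (C * DT)).astype(int)
        ok = bins < n_t
        scores[ok] += transients[i, bins[ok]]
    return scores

# Toy forward model: a single hidden point produces one pulse per scanned
# wall point, delayed by its round-trip time of flight.
hidden = np.array([0.2, -0.1, 0.5])
g = np.linspace(-0.5, 0.5, 8)
wall = np.array([[x, y, 0.0] for x in g for y in g])
n_t = 400
transients = np.zeros((len(wall), n_t))
for i, p in enumerate(wall):
    b = int(np.round(2.0 * np.linalg.norm(hidden - p) / (C * DT)))
    transients[i, b] = 1.0

# Reconstruct: the true hidden position scores as brightly as any candidate.
gv = np.linspace(-0.4, 0.4, 9)
zs = np.linspace(0.3, 0.7, 5)
voxels = np.array([[x, y, z] for x in gv for y in gv for z in zs]
                  + [hidden.tolist()])
scores = backproject(transients, wall, voxels)
```

Backprojection like this scales poorly because every wall point must be compared against every voxel; recasting the reconstruction in the frequency domain, as the researchers' seismology-inspired approach does, replaces that nested loop with fast Fourier transforms.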

To make sure the system would be practical for real-world applications, the researchers used hardware, an imaging style, and scanning and image processing speeds that are commonly found in autonomous car vision systems.

The around-the-corner camera’s near-real-time reconstruction of researcher David Lindell moving around in a high visibility tracksuit. Courtesy of David Lindell.

The system scans at four frames per second and can reconstruct a scene at up to 60 frames per second on a computer with a graphics processing unit. The new NLOS imaging system was able to record room-size scenes outdoors under indirect sunlight and to scan people wearing retroreflective clothing at interactive rates.


Someday, the researchers hope that superhuman vision systems could help autonomous cars and robots operate even more safely than they would with human guidance. “People talk about building a camera that can see as well as humans for applications such as autonomous cars and robots, but we want to build systems that go well beyond that,” said professor Gordon Wetzstein. “We want to see things in 3D, around corners, and beyond the visible light spectrum.”

Being able to see real-time movement from otherwise invisible light bounced around a corner was a thrilling moment for this team, but a practical system for autonomous cars or robots will require further enhancements. “It’s very humble steps,” Wetzstein said. “The movement still looks low-resolution and it’s not superfast, but compared to the state-of-the-art last year it is a significant improvement.”

The team hopes to move toward testing its system on autonomous research cars, while looking into other possible applications such as medical imaging that can see through tissues. Along with improvements to speed and resolution, the researchers will also work on making the system more versatile so it can address challenging visual conditions that drivers encounter, such as fog, rain, sandstorms, and snow.

The research was presented at SIGGRAPH 2019, July 28 to Aug. 1 in Los Angeles (http://www.computationalimaging.org/publications/nlos-fk/).


Inspired by inverse methods used in seismology, the researchers adapted a frequency-domain method, f-k migration, for solving the inverse NLOS problem. Courtesy of Stanford Computational Imaging Lab.



Published: August 2019
