Algorithm Helps Autonomous Vehicles to Recognize Terrain in All Seasons

An algorithm developed at Caltech enables autonomous vehicles to determine their location simply by looking at the terrain around them. For the first time, the technology works regardless of season, its developers said.

The general process from which the current work (and algorithm) stems, called visual terrain-relative navigation (VTRN), was developed in the 1960s. By comparing nearby terrain to high-resolution satellite images, autonomous systems can determine their location.
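As a rough illustration of that classical idea, one way to implement correlation-based VTRN matching is to slide the vehicle's downward-looking image across a georeferenced satellite map and take the offset with the highest normalized correlation. The Python sketch below is only an illustration of that general approach, not the implementation described in the paper; the function name and inputs are hypothetical:

import cv2

def localize_by_correlation(vehicle_img, satellite_map):
    # Slide the vehicle image over the satellite map and score each offset with
    # normalized cross-correlation; the peak gives the best-matching location.
    # This classical matcher tolerates uniform brightness shifts but fails when
    # seasonal content (snow cover, leaf-off terrain) changes what is in the scene.
    response = cv2.matchTemplate(satellite_map, vehicle_img, cv2.TM_CCOEFF_NORMED)
    _, peak_score, _, peak_loc = cv2.minMaxLoc(response)
    col, row = peak_loc  # OpenCV returns (x, y); convert to (row, col)
    return (row, col), peak_score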

The problem with the current generation of VTRN is that the terrain must closely match the images in its database for the navigation method to work effectively. Anything that alters or obscures the environment, such as snow cover or fallen leaves, prevents the images from matching up. Without a database of landscape images under every conceivable condition, VTRN systems become easily confused.

A team from the lab of Soon-Jo Chung, Bren Professor of Aerospace and Control and Dynamical Systems and research scientist at the NASA Jet Propulsion Laboratory, turned to deep learning and artificial intelligence to remove seasonal content that hinders current VTRN systems.

“The rule of thumb is that both images — the one from the satellite and the one from the autonomous vehicle — have to have identical content for current technologies to work. The differences that they can handle are about what can be accomplished with an Instagram filter that changes an image’s hues,” said Anthony Fragoso, a Caltech lecturer and lead author of the paper. “In real systems, however, things change drastically based on season because the images no longer contain the same objects and cannot be directly compared.”

Most computer vision strategies rely on human annotators who carefully curate large datasets to teach an algorithm how to recognize what it is seeing. The new process instead uses “self-supervised learning,” in which the algorithm finds these patterns on its own by analyzing details and features that humans might miss.
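One common self-supervised recipe, shown below purely as a hedged illustration (the paper's actual network and training objective may differ), trains an encoder so that images of the same terrain patch taken in different seasons map to nearby embeddings, with no human labels involved. The architecture, loss, and stand-in data here are illustrative assumptions:

import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEncoder(nn.Module):
    # Small convolutional encoder that maps a terrain patch to a unit-length embedding.
    def __init__(self, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

def contrastive_loss(z_a, z_b, temperature=0.1):
    # InfoNCE-style loss: embeddings of the same place in different seasons are
    # pulled together; the other patches in the batch act as negatives.
    logits = z_a @ z_b.t() / temperature
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)

# Usage sketch with random stand-in tensors; real training would use
# co-registered summer/winter image pairs of the same locations.
encoder = PatchEncoder()
summer = torch.randn(8, 1, 64, 64)
winter = torch.randn(8, 1, 64, 64)
loss = contrastive_loss(encoder(summer), encoder(winter))
loss.backward()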


Supplementing the current generation of VTRN with the new system yielded more accurate localization: In one experiment the researchers attempted to localize images of summer foliage against winter leaf-off imagery using a correlation-based VTRN technique. They found that 50% of attempts resulted in navigation failures. In contrast, inserting the new algorithm into the VTRN worked far better; 92% of attempts were correctly matched. The remaining 8% could be identified as problematic in advance and then easily managed using other established navigation techniques.
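A success rate like the 92% figure above can be scored in a straightforward way: count an attempt as correct when the estimated position falls within some tolerance of ground truth. The short sketch below illustrates only that bookkeeping; the tolerance value and data are made up for illustration:

import numpy as np

def match_success_rate(estimated_xy, true_xy, tolerance_px=10.0):
    # Fraction of localization attempts whose position error is within tolerance.
    errors = np.linalg.norm(np.asarray(estimated_xy) - np.asarray(true_xy), axis=1)
    return float(np.mean(errors <= tolerance_px))

# Example: one near-miss within tolerance and one gross failure -> 0.5
print(match_success_rate([[12.0, 40.0], [5.0, 5.0]], [[10.0, 41.0], [80.0, 90.0]]))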

“Computers can find obscure patterns that our eyes can’t see and can pick up even the smallest trend,” said Connor Lee, a Caltech graduate student and coauthor of the paper. VTRN was in danger of becoming an infeasible technology in common but challenging environments, he said. “We rescued decades of work in solving this problem.”

Beyond the utility for autonomous drones on Earth, the system can also be applied to space missions. The entry, descent, and landing system on JPL’s Mars 2020 Perseverance rover mission, for example, used VTRN for the first time on the Red Planet to land at Jezero Crater, a site previously thought to be too hazardous for safe entry. With rovers like Perseverance, “a certain amount of autonomous driving is necessary,” Chung said, “since transmissions could take 20 minutes to travel between Earth and Mars, and there is no GPS on Mars.” The team also considered the Martian polar regions, which, like Earth, undergo intense seasonal changes. The new system could allow for improved navigation to support scientific objectives, including the search for water.

Next, the team intends to expand the technology to account for changes in the weather. If successful, the work could help improve navigation systems for driverless cars.

The research was published in Science Robotics (www.doi.org/10.1126/scirobotics.abf3320).

Vision-Spectra.com
Jun 2021
GLOSSARY
machine vision
Interpretation of an image of an object or scene through the use of optical noncontact sensing mechanisms for the purpose of obtaining information and/or controlling machines or processes.
Research & Technology, machine vision, computer vision, machine learning, visual terrain-relative navigation, VTRN, autonomous, autonomous drones, autonomous vehicles, location, location recognition system, image recognition, Caltech, Soon-Jo Chung, Anthony Fragoso, Americas, The News Wire
