
Algorithm Helps Autonomous Vehicles to Recognize Terrain in All Seasons

An algorithm developed at Caltech enables autonomous vehicles to determine their location simply by looking at the terrain around them. For the first time, the technology works regardless of season, its developers said.

The general approach from which the current work stems, called visual terrain-relative navigation (VTRN), was developed in the 1960s. By comparing nearby terrain to high-resolution satellite images, autonomous systems can determine their location.
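As a rough illustration of that classical matching idea, the sketch below uses normalized cross-correlation (via scikit-image's `match_template`) to find where a small camera patch best fits within a satellite tile. The synthetic arrays and variable names are illustrative assumptions, not details from the Caltech work.

```python
# Minimal sketch of correlation-based terrain matching (the classical VTRN idea).
# Assumes a grayscale satellite tile and a downward-facing camera patch,
# both resampled to the same ground resolution; all data here is synthetic.
import numpy as np
from skimage.feature import match_template

def locate_patch(satellite_tile: np.ndarray, camera_patch: np.ndarray):
    """Return the (row, col) offset in the satellite tile that best matches
    the camera patch, plus the peak correlation score."""
    response = match_template(satellite_tile, camera_patch)  # normalized cross-correlation
    peak = np.unravel_index(np.argmax(response), response.shape)
    return peak, float(response[peak])

# Example with synthetic data: cut a patch out of a random tile and recover its location.
rng = np.random.default_rng(0)
tile = rng.random((512, 512))
patch = tile[200:264, 320:384].copy()
(est_row, est_col), score = locate_patch(tile, patch)
print(est_row, est_col, round(score, 3))  # expected: 200 320 1.0
```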

The problem with the current generation of VTRN is that the terrain must closely match the images in its database for the navigation method to work effectively. Anything that alters or obscures the terrain, such as snow cover or fallen leaves, causes the images not to match up. Without a database of landscape images under every conceivable condition, VTRN systems can easily become confused.

A team from the lab of Soon-Jo Chung, Bren Professor of Aerospace and Control and Dynamical Systems and research scientist at the NASA Jet Propulsion Laboratory, turned to deep learning and artificial intelligence to remove seasonal content that hinders current VTRN systems.

“The rule of thumb is that both images — the one from the satellite and the one from the autonomous vehicle — have to have identical content for current technologies to work. The differences that they can handle are about what can be accomplished with an Instagram filter that changes an image’s hues,” said Anthony Fragoso, a Caltech lecturer and lead author of the paper. “In real systems, however, things change drastically based on season because the images no longer contain the same objects and cannot be directly compared.”

Most computer vision strategies rely on human annotators who carefully curate large data sets to teach an algorithm how to recognize what it is seeing. The new process instead uses “self-supervised learning,” in which the algorithm finds patterns on its own by analyzing details and features that humans might miss.
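The article does not spell out the team's architecture or training recipe, so the PyTorch sketch below only illustrates the general self-supervised idea: pull the embeddings of two differently altered views of the same terrain patch together, so that appearance changes (a stand-in here for seasonal variation) are discarded. The encoder, loss, and random data are placeholders, not the authors' method.

```python
# Illustrative self-supervised sketch (not the published architecture): learn an
# embedding in which two views of the same terrain patch map close together,
# suppressing season-like appearance changes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEncoder(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)  # unit-length embeddings

def info_nce(z1, z2, temperature: float = 0.1):
    """Contrastive loss: each patch's two views are positives; all other
    patches in the batch serve as negatives."""
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

encoder = PatchEncoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for _ in range(5):  # toy loop on random stand-in data
    patches = torch.rand(8, 1, 64, 64)                                 # "summer" views
    altered = (patches + 0.3 * torch.rand_like(patches)).clamp(0, 1)   # "winter" views
    loss = info_nce(encoder(patches), encoder(altered))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```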


Supplementing the current generation of VTRN with the new system yielded more accurate localization. In one experiment, the researchers attempted to localize images of summer foliage against winter leaf-off imagery using a correlation-based VTRN technique and found that 50% of attempts resulted in navigation failures. In contrast, inserting the new algorithm into the VTRN pipeline worked far better: 92% of attempts were correctly matched, and the remaining 8% could be identified as problematic in advance and then easily managed using other established navigation techniques.
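One plausible way to flag those problematic cases in advance, offered here as an assumption rather than the authors' method, is to accept a VTRN fix only when the correlation peak is strong and clearly separated from the rest of the match surface, falling back to other navigation sources otherwise.

```python
# Hypothetical confidence check on a correlation response map: accept the match
# only if the best score is high and well separated from the best score found
# away from the peak; otherwise defer to other navigation techniques.
import numpy as np

def accept_fix(response: np.ndarray, min_peak: float = 0.7,
               min_margin: float = 0.1, exclusion: int = 8) -> bool:
    peak_idx = np.unravel_index(np.argmax(response), response.shape)
    best = response[peak_idx]
    masked = response.astype(float).copy()
    r, c = peak_idx
    # Mask a window around the peak before looking for the runner-up score.
    masked[max(r - exclusion, 0):r + exclusion + 1,
           max(c - exclusion, 0):c + exclusion + 1] = -np.inf
    runner_up = masked.max()
    return bool(best >= min_peak and (best - runner_up) >= min_margin)
```

The thresholds here are arbitrary; in a real system they would be tuned against held-out imagery.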

“Computers can find obscure patterns that our eyes can’t see and can pick up even the smallest trend,” said co-author Connor Lee, a Caltech graduate student. VTRN was in danger of becoming an infeasible technology in common but challenging environments, he said. “We rescued decades of work in solving this problem.”

Beyond its utility for autonomous drones on Earth, the system can also be applied to space missions. The entry, descent, and landing system on JPL’s Mars 2020 Perseverance rover mission, for example, used VTRN for the first time on the Red Planet to land at Jezero Crater, a site previously thought to be too hazardous for safe entry. With rovers like Perseverance, “a certain amount of autonomous driving is necessary,” Chung said, “since a transmission could take 20 minutes to travel between Earth and Mars, and there is no GPS on Mars.” The team also considered the Martian polar regions, which, like Earth, undergo intense seasonal changes. The new system could allow for improved navigation in support of scientific objectives, including the search for water.

Next, the team intends to expand the technology to account for changes in the weather. If successful, the work could help improve navigation systems for driverless cars.

The research was published in Science Robotics (www.doi.org/10.1126/scirobotics.abf3320).

Published: June 2021
