Deep Neural Network Guides Drone to Smooth Landing

Researchers at the California Institute of Technology (Caltech) have developed a system that uses a deep neural network (DNN) to help autonomous drones “learn” how to land more safely and quickly, while consuming less power. The system, dubbed the Neural Lander, is a learning-based controller that tracks the drone’s position and speed and adjusts its landing trajectory and rotor speed accordingly to achieve a steady landing. To keep the drone’s flight smooth, the team applied spectral normalization, which constrains the neural network’s outputs so that small changes in inputs or conditions cannot produce wildly varying predictions.

The Neural Lander system is tested in the Aerodrome, a three-story drone arena at Caltech’s Center for Autonomous Systems and Technologies. Courtesy of Caltech.
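Spectral normalization, mentioned above, rescales each weight matrix of a network so that its largest singular value stays bounded, which in turn bounds how sharply the network’s output can change with its input. The following is a minimal illustrative sketch using power iteration; it is not the Neural Lander authors’ implementation, and the function names and the target bound of 1.0 are assumptions for demonstration.

```python
import numpy as np

def spectral_norm(W, n_iters=50):
    """Estimate the largest singular value of W via power iteration."""
    u = np.random.default_rng(0).standard_normal(W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    return u @ W @ v

def spectrally_normalize(W, target=1.0):
    """Rescale W so its spectral norm is at most `target`.

    Bounding each layer's spectral norm bounds the Lipschitz
    constant of the whole network, so small input changes can
    only produce proportionally small output changes.
    """
    sigma = spectral_norm(W)
    return W if sigma <= target else W * (target / sigma)

# Demo: a random weight matrix before and after normalization.
W = np.random.default_rng(1).standard_normal((4, 6))
W_sn = spectrally_normalize(W)
print(np.linalg.svd(W, compute_uv=False)[0])    # typically > 1
print(np.linalg.svd(W_sn, compute_uv=False)[0])  # approximately 1
```

Applying this rescaling to every layer during training is what keeps the learned controller’s commands from swinging abruptly as flight conditions shift.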

The researchers measured improvements in landing by examining the drone’s deviation from an idealized trajectory in 3D space. When tested, the Neural Lander was found to decrease vertical error by 100%, allowing for controlled landings, and to reduce lateral drift by up to 90%. It achieved actual landing rather than getting stuck about 10 to 15 cm above the ground, as unmodified conventional flight controllers often do. In addition, the Neural Lander produced a much smoother transition as the drone went from skimming across the table to flying in free space.

“This interdisciplinary effort brings together experts from machine learning and control systems. We have barely started to explore the rich connections between the two areas,” said professor Anima Anandkumar.

The researchers have filed a patent on the new system, which could be used for projects currently under development at Caltech’s Center for Autonomous Systems and Technologies, including an autonomous medical transport vehicle that could land in hard-to-reach locations such as in gridlocked traffic.

The research was presented at the Institute of Electrical and Electronics Engineers (IEEE) International Conference on Robotics and Automation, May 22, 2019, in Montreal.

Engineers and computer scientists at Caltech's Center for Autonomous Systems and Technologies (CAST) use a deep neural network to help autonomous drones compensate for complex turbulence to skim and land more efficiently. Courtesy of Caltech.

Published: June 2019