
AI Tool Uses Aerial Imaging to Scan Structures for Wildfire Damage

The DamageMap application identifies buildings as damaged in red or not damaged in green. Researchers developed the platform to provide immediate information about structural damage following wildfires. Courtesy of Galanis et al.
A system developed by researchers from Stanford University and California Polytechnic State University (Cal Poly) called DamageMap uses aerial photography and artificial intelligence to assess wildfire damage to buildings. Rather than comparing before and after photos, the system is able to use machine learning to consider only post-fire images.

The current method of assessing damage involves people going door-to-door to check every building. Though the Stanford system is not intended to replace in-person damage classification, it may find use as a scalable supplementary tool by offering immediate results and providing the exact locations of identified buildings.

“We wanted to automate the process and make it much faster for first responders or even for citizens that might want to know what happened to their house after a wildfire,” said lead author and graduate student Marios Galanis. “Our model results are on par with human accuracy.”

The team tested the technique using a variety of satellite, aerial, and drone photography, achieving at least 92% accuracy.

“With this application, you could probably scan the whole town of Paradise in a few hours,” said senior author G. Andrew Fricker, an assistant professor at Cal Poly, referencing the California town destroyed by the 2018 Camp Fire. “I hope this can bring more information to the decision-making process for firefighters and emergency responders, and also assist fire victims by getting information to help them file insurance claims and get their lives back on track.”

Most computational systems cannot classify building damage efficiently because they compare post-disaster photos with pre-disaster images captured by the same satellite, at the same camera angle, and under the same lighting conditions; such imagery can be expensive to obtain or simply unavailable. Because current hardware cannot record high-resolution surveillance imagery daily, these systems cannot provide consistent photos, the researchers said.

From the visual data the system obtains, DamageMap analyzes post-wildfire images to identify damage through features such as blackened surfaces, crumbled roofs, and the absence of structures.
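The actual DamageMap model is a machine learning system trained on labeled aerial imagery; as a purely illustrative sketch, the idea of classifying a building from a single post-fire image can be reduced to a toy heuristic. The function below flags a rooftop patch as damaged when most of its pixels are very dark (a crude stand-in for the "blackened surfaces" cue). The threshold values and the `classify_patch` function are illustrative assumptions, not part of the published system.

```python
import numpy as np

def classify_patch(patch, dark_threshold=0.2, damage_fraction=0.5):
    """Toy post-fire-only classifier.

    `patch` is an H x W array of grayscale intensities in [0, 1].
    A patch counts as damaged when more than `damage_fraction` of its
    pixels fall below `dark_threshold` (i.e., the roof looks charred).
    Both thresholds are illustrative, not values from the paper.
    """
    dark_pixel_share = np.mean(patch < dark_threshold)
    return "damaged" if dark_pixel_share > damage_fraction else "not damaged"

# Synthetic examples: a charred (dark) roof vs. an intact (bright) roof.
charred = np.full((32, 32), 0.05)
intact = np.full((32, 32), 0.8)
print(classify_patch(charred))  # -> damaged
print(classify_patch(intact))   # -> not damaged
```

A real system would replace this hand-set rule with a convolutional network whose decision boundary is learned from labeled examples, but the input/output contract is the same: one post-fire image in, a damaged/not-damaged label out.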

“People can tell if a building is damaged or not — we don’t need the before picture — so we tested that hypothesis with machine learning,” said co-author and graduate student Krishna Rao. “This can be a powerful tool for rapidly assessing damage and planning disaster recovery efforts.”

Because the team used supervised learning to train its system, the system can be further improved by feeding it more data. The scientists tested the application using damage assessments from Paradise, Calif., after the Camp Fire, and the Whiskeytown-Shasta-Trinity National Recreation Area after the Carr Fire of 2018.

The researchers said the open-source platform can be applied to any area prone to wildfires, and they hope it could also be trained to classify damages from other disasters, such as floods or hurricanes.

“So far our results suggest that this can be generalized, and if people are interested in using it in real cases, then we can keep improving it,” Galanis said.

Galanis and Rao developed the project during Stanford’s 2020 “Big Earth Hackathon: Wildland Fire Challenge.” They then collaborated with Cal Poly researchers to refine the platform.

The research was published in the International Journal of Disaster Risk Reduction in September 2021.

©2022 Photonics Media, 100 West St., Pittsfield, MA, 01201 USA, [email protected]