Adversarial Learning Techniques Test Image Detection Systems

Engineers at Southwest Research Institute (SwRI) are finding and documenting vulnerabilities in machine learning algorithms that can make objects “invisible” to image detection systems that use deep learning.

Many of today’s vehicles use object detection systems to help avoid collisions. SwRI engineers developed unique patterns that can trick these systems into seeing something else, seeing the objects in another location, or not seeing the objects at all. In this photo, the object detection system sees a person rather than a vehicle. Courtesy of Southwest Research Institute.

Deep learning systems reliably detect objects under an array of conditions and, as such, are used in myriad applications and industries, often for safety-critical uses. However, image processing systems that use deep learning algorithms can be deceived through adversarial learning techniques.

To mitigate the risk of compromise in automated image processing systems, research engineers Abe Garza and David Chambers developed adversarial learning patterns for testing those systems. When worn by a person or mounted on a vehicle, the patterns trick object detection cameras into concluding that the objects aren't there, that they're something else, or that they're in another location.

What looks like a colorful pattern to the human eye looks like a bicycle to an object detection system. While deep learning algorithms used in these systems are reliable, they can be deceived with special imagery. SwRI researchers are developing techniques to mitigate the risk of compromise in these systems. Courtesy of Southwest Research Institute.

“These patterns cause the algorithms in the camera to either misclassify or mislocate objects, creating a vulnerability,” Garza said. “We call these patterns ‘perception invariant’ adversarial examples because they don’t need to cover the entire object or be parallel to the camera to trick the algorithm. The algorithms can misclassify an object as long as they sense any part of the pattern.” The patterns are designed so that object detection systems interpret them in a specific, attacker-chosen way.
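The core idea behind adversarial examples like these can be illustrated on a toy model. The sketch below applies a fast-gradient-sign-style perturbation (FGSM, a standard adversarial attack technique, not necessarily SwRI's method) to a hand-rolled logistic "detector"; the weights, features, and step size are all hypothetical, chosen only to show how a small, deliberate nudge to the input flips the classification.

```python
import math

# Toy "detector": logistic regression over four pixel-like features.
# Weights and bias are hypothetical, picked only for illustration.
weights = [1.5, -2.0, 0.8, 0.5]
bias = -0.2

def score(x):
    """Probability the toy detector assigns to class 'vehicle'."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, epsilon=0.5):
    """Fast-gradient-sign step: nudge each feature in the direction
    that decreases the 'vehicle' score. For logistic regression the
    gradient of the score w.r.t. x is proportional to the weights,
    so the sign of each weight gives the attack direction."""
    return [xi - epsilon * math.copysign(1.0, w)
            for xi, w in zip(x, weights)]

clean = [0.9, 0.1, 0.7, 0.6]       # detector confidently sees a vehicle
adv = fgsm_perturb(clean)          # small structured perturbation
print(f"clean score:       {score(clean):.3f}")
print(f"adversarial score: {score(adv):.3f}")
```

The perturbation is bounded per feature, yet it is aimed precisely along the model's gradient, which is why a visually minor pattern can dominate the decision.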

“The first step to resolving these exploits is to test the deep-learning algorithms,” Garza said. The team has created a framework capable of repeatedly testing adversarial learning attacks against a variety of deep learning detection programs.
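A repeatable testing framework of the kind described above can be sketched as a loop over detectors, patterns, and scenes, counting how often a pattern suppresses a detection that succeeds on the clean input. This is a hypothetical illustration, not SwRI's actual framework; the detectors and patterns here are toy stand-ins.

```python
def run_attack_suite(detectors, patterns, scenes):
    """For every (detector, pattern, scene) triple, record whether the
    detector still reports the object after the pattern is applied.
    Returns {detector_name: count_of_successful_attacks}."""
    results = {name: 0 for name in detectors}
    for name, detect in detectors.items():
        for pattern in patterns:
            for scene in scenes:
                attacked = pattern(scene)
                if detect(scene) and not detect(attacked):
                    results[name] += 1  # attack suppressed a true detection
    return results

# Toy stand-ins: a "scene" is a single brightness value, a "detector"
# fires above a threshold, and a "pattern" darkens the scene.
detectors = {"det_a": lambda s: s > 0.5, "det_b": lambda s: s > 0.3}
patterns = [lambda s: s - 0.4]
scenes = [0.6, 0.8]
print(run_attack_suite(detectors, patterns, scenes))
```

Swapping the stand-ins for real detection models and real pattern renderers would turn the same loop into the kind of harness that repeatedly tests attacks against a variety of deep learning detection programs.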

SwRI researchers continue to evaluate how much, or how little, of the pattern is needed to misclassify or mislocate an object. This research will allow the team to test object detection systems and ultimately improve the security of deep learning algorithms.


SwRI engineers are investigating how to thoroughly test object detection systems and improve the security of the deep learning algorithms they use. Courtesy of Southwest Research Institute.



Vision Spectra
Summer 2019
GLOSSARY
machine vision: Interpretation of an image of an object or scene through the use of optical noncontact sensing mechanisms for the purpose of obtaining information and/or controlling machines or processes.

©2019 Photonics Media, 100 West St., Pittsfield, MA, 01201 USA, info@photonics.com
