Adversarial Learning Techniques Test Image Detection Systems

Engineers at Southwest Research Institute (SwRI) are finding and documenting vulnerabilities in machine learning algorithms that can make objects “invisible” to deep learning-based image detection systems.


Many of today’s vehicles use object detection systems to help avoid collisions. SwRI engineers developed unique patterns that can trick these systems into seeing something else, seeing the objects in another location, or not seeing the objects at all. In this photo, the object detection system sees a person rather than a vehicle. Courtesy of Southwest Research Institute.

Deep learning systems reliably detect objects under a wide range of conditions and are therefore used across many industries, often in safety-critical applications. However, image processing systems that rely on deep learning algorithms can be deceived through adversarial learning techniques.

To mitigate the risk of compromise in automated image processing systems, research engineers Abe Garza and David Chambers developed adversarial learning patterns for testing those systems. When worn by a person or mounted on a vehicle, the patterns trick object detection cameras into thinking an object isn’t there, that it’s something else, or that it’s in another location.


What looks like a colorful pattern to the human eye looks like a bicycle to an object detection system. While deep learning algorithms used in these systems are reliable, they can be deceived with special imagery. SwRI researchers are developing techniques to mitigate the risk of compromise in these systems. Courtesy of Southwest Research Institute.

“These patterns cause the algorithms in the camera to either misclassify or mislocate objects, creating a vulnerability,” Garza said. “We call these patterns ‘perception invariant’ adversarial examples because they don’t need to cover the entire object or be parallel to the camera to trick the algorithm. The algorithms can misclassify the object as long as they sense some part of the pattern.” The patterns are designed so that object detection camera systems interpret them in a specific, predictable way.
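The article does not disclose how SwRI generates these patterns, but the general idea behind adversarial patches can be illustrated with a short gradient-based optimization loop. The sketch below is a hypothetical minimal example, assuming PyTorch and torchvision are available; it uses a pretrained image classifier as a stand-in for a detection network, and the file name car.jpg, the patch size, and its placement are placeholders.

```python
# Minimal, hypothetical sketch of adversarial-patch optimization (not SwRI's method).
# A small patch is pasted onto an image and optimized so that a pretrained
# classifier (standing in for a detection network) stops reporting the class
# it originally saw.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

to_tensor = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

image = to_tensor(Image.open("car.jpg")).unsqueeze(0)   # placeholder input photo
with torch.no_grad():
    true_label = model(normalize(image)).argmax(dim=1)  # the class we want to "hide"

patch = torch.rand(1, 3, 50, 50, requires_grad=True)    # learnable 50 x 50 pattern
optimizer = torch.optim.Adam([patch], lr=0.05)

for step in range(200):
    patched = image.clone()
    patched[:, :, 80:130, 80:130] = patch.clamp(0, 1)   # paste the pattern onto the scene
    logits = model(normalize(patched))
    # Minimizing the negative cross-entropy pushes the true class's score down.
    loss = -torch.nn.functional.cross_entropy(logits, true_label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

torch.save(patch.detach().clamp(0, 1), "patch.pt")      # the learned adversarial pattern
```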

“The first step to resolving these exploits is to test the deep-learning algorithms,” Garza said. The team has created a framework capable of repeatedly testing adversarial learning attacks against a variety of deep learning detection programs.
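The article does not detail SwRI’s framework, but a testing harness of this kind can be sketched as a loop that replays candidate patterns against several detectors and tallies how often each one is fooled. The sketch below is hypothetical: the detectors, patterns, and make_patched_images callables are placeholders, and each detector is assumed to return (label, box, score) tuples.

```python
# Hypothetical sketch of a harness for repeatedly testing adversarial patterns
# against multiple object detection models (not SwRI's actual framework).
from dataclasses import dataclass

@dataclass
class TrialResult:
    model_name: str
    pattern_name: str
    fooled: int   # images where the expected object was missed or misclassified
    total: int

def count_fooled(detector, patched_images, expected_label, min_score=0.5):
    """Count patched images in which the detector no longer reports expected_label."""
    fooled = 0
    for image in patched_images:
        detections = detector(image)  # assumed interface: list of (label, box, score)
        labels = {label for label, _, score in detections if score >= min_score}
        if expected_label not in labels:
            fooled += 1
    return fooled

def run_suite(detectors, patterns, make_patched_images, expected_label):
    """Replay every pattern against every detector and record the fooling rate."""
    results = []
    for model_name, detector in detectors.items():
        for pattern_name, pattern in patterns.items():
            images = make_patched_images(pattern)   # paste the pattern into test scenes
            fooled = count_fooled(detector, images, expected_label)
            results.append(TrialResult(model_name, pattern_name, fooled, len(images)))
    return results

# Example usage (all names are placeholders):
# results = run_suite({"detector_a": detector_a}, {"pattern_1": pattern_1},
#                     make_patched_images, expected_label="car")
```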

SwRI researchers continue to evaluate how much, or how little, of the pattern is needed to misclassify or mislocate an object. This research will allow the team to test object detection systems and ultimately improve the security of deep learning algorithms.


SwRI engineers are investigating how to thoroughly test object detection systems and improve the security of the deep learning algorithms they use. Courtesy of Southwest Research Institute.
