Is It a Wolf or a Husky? Can You Trust Your AI Vision Model?

Jul 20, 2021
About This Webinar
Artificial intelligence is ubiquitous in computer vision applications, from self-driving cars to retail, healthcare, manufacturing, and beyond. But how do you know your computer vision AI model is working as advertised? Ted Way covers ways that AI models can be fooled, such as by adding stickers to a stop sign, and focuses on techniques for probing how a vision AI model makes its decisions. Finally, Way shows how LIME (local interpretable model-agnostic explanations) reveals what an AI model is attending to when, for example, it looks at pictures of wolves and huskies.
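The core idea behind LIME for images can be sketched in a few lines: split the image into superpixels, randomly switch segments on and off, query the black-box model on each perturbed image, and fit a weighted linear model whose coefficients rank each segment's importance. The sketch below is a minimal from-scratch illustration of that idea (it is not the presenter's code, nor the official `lime` package; the function name and parameters are hypothetical):

```python
import numpy as np

def explain_with_lime(image, segments, predict_fn, n_samples=500, seed=0):
    """Estimate per-segment importance for a black-box classifier.

    image:      (H, W) or (H, W, C) array
    segments:   (H, W) integer map assigning each pixel to a superpixel
    predict_fn: callable mapping a batch of images to target-class scores
    """
    rng = np.random.default_rng(seed)
    seg_ids = np.unique(segments)
    k = len(seg_ids)

    # 1. Sample binary masks: which segments are "on" in each perturbation.
    z = rng.integers(0, 2, size=(n_samples, k))
    z[0] = 1  # include the unperturbed image as one sample

    # 2. Build the perturbed images ("off" segments are zeroed out).
    batch = []
    for row in z:
        mask = np.isin(segments, seg_ids[row == 1])
        batch.append(image * mask[..., None] if image.ndim == 3 else image * mask)
    preds = predict_fn(np.stack(batch))  # shape (n_samples,)

    # 3. Weight samples by proximity to the original image.
    dist = 1.0 - z.mean(axis=1)          # fraction of segments turned off
    w = np.exp(-(dist ** 2) / 0.25)      # exponential proximity kernel

    # 4. Fit a weighted linear model; coefficients = segment importances.
    X = np.hstack([z, np.ones((n_samples, 1))])  # add an intercept column
    W = np.diag(w)
    coef, *_ = np.linalg.lstsq(X.T @ W @ X, X.T @ W @ preds, rcond=None)
    return dict(zip(seg_ids.tolist(), coef[:k]))
```

In the wolf-versus-husky example, this is how LIME can expose a shortcut: if the coefficients for background (snow) segments dominate those for the animal itself, the model is classifying the scenery rather than the dog. Real applications would use the `lime` Python package with a proper superpixel algorithm rather than this sketch.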

***This presentation premiered during the 2021 Vision Spectra Conference. For more information on Photonics Media conferences, visit events.photonics.com.

About the presenter:
Ted Way, Ph.D., is a program manager lead for the Microsoft Insights Apps AI team. He is passionate about telling the story of how AI will empower people and organizations to achieve more. He has bachelor's degrees in electrical engineering and computer engineering, master's degrees in electrical engineering and biomedical engineering, and a doctorate in biomedical engineering from the University of Michigan in Ann Arbor. He wrote his doctoral dissertation on "spell check for radiologists," a computer-aided diagnosis (CAD) system that uses image processing and machine learning to predict lung cancer malignancy on chest CT scans.
Tags: artificial intelligence, Vision Spectra, machine vision