Focal Point with Cognex’s Reto Wyss

RETO WYSS, COGNEX

What will deep learning mean to machine vision and automation?

For four decades, tool-based vision has dominated the scene; people developed algorithms and solutions from the pixel up to solve a particular problem. It has been a very valuable and reliable approach, but there has always been a class of problems where traditional rule-based strategies were insufficient. As a result, humans had to perform the tasks that this approach could not. It's really about the difference between quantitative measurements and qualitative analysis. Traditional rule-based measurements excel at the former, while humans have proven to be very good at the latter. When we look at the machine vision market, there have always been these two types of applications, so I don't see deep learning replacing rule-based machine vision, but rather filling a gap as a complementary technology. Some applications that could only be addressed with a great deal of effort using traditional machine vision can now be done much more easily using deep learning techniques.
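To make the quantitative/qualitative split concrete, here is a minimal Python sketch using OpenCV on a synthetic image (all data and names are illustrative, not Cognex code): a rule-based width measurement is a few deterministic steps from the pixel up, whereas a qualitative judgment such as grading a blemish has no comparable closed-form rule.

```python
import numpy as np
import cv2  # OpenCV, assumed available

# Synthetic 8-bit grayscale "part" image: a bright rectangle on a dark field.
img = np.zeros((100, 200), dtype=np.uint8)
cv2.rectangle(img, (40, 30), (160, 70), color=255, thickness=-1)

# Rule-based (quantitative): threshold, find the part's contour, measure width.
_, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
x, y, w, h = cv2.boundingRect(contours[0])
print(f"measured width: {w} px")  # deterministic, pixel-up measurement

# Qualitative judgments (e.g., "is this scratch an acceptable blemish?") have
# no equivalent closed-form rule; that is the gap a learned classifier fills.
```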

What is most important for users to know about deep learning?

The most important thing to understand is that it's not a magic bullet. For it to work, it is critical to have good data to begin with so you can train your model properly. If you look at traditional machine vision and the effort that somebody would put into a solution, it probably breaks down to 80% of the effort spent developing the solution and 20% testing and verifying it. With deep learning, it's the opposite. You spend 20% on feasibility and proof of concept, establishing that the approach has the potential to work, and then 80% on actually validating it. In the past in machine vision, the person doing that work was a vision engineer, but now in deep learning, the task is shifting much more toward data science.
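A minimal sketch of that inverted effort split, using scikit-learn on synthetic data (a logistic regression stands in for whatever deep learning model is actually trained; the point is the validation discipline, not the architecture):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Stand-in for labeled inspection images: feature vectors X with
# OK(0)/defect(1) labels y from human inspectors (synthetic here).
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9],
                           random_state=0)

# Hold out a large validation split: proving the model works on
# unseen parts is where most of the effort goes.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Per-class precision/recall on data the model never saw, ideally
# reviewed with the same people who labeled it.
print(classification_report(y_val, model.predict(X_val)))
```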

What’s been the biggest surprise in deploying deep learning?

For a particular application where you deploy deep learning, you find that you run into problems very similar to those of human inspection. You are now trying to solve more qualitative types of challenges, and the correct result is not necessarily a black-and-white, cut-and-dried determination, but more of a gray area where the decision requires judgment. These uncertainties arise because the people labeling the images do not actually agree among themselves on what is an acceptable defect and what is not, and that uncertainty migrates downstream. People are really not prepared for that. They expect a clear-cut determination because this is a machine, a computer, and they neglect the fact that the task at hand is already pretty uncertain, sometimes pretty ill-defined. It's a lot like humans.
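One way to surface that labeling uncertainty before training is to measure inter-annotator agreement. A minimal sketch, assuming two hypothetical inspectors labeling the same parts (Cohen's kappa via scikit-learn; the labels are invented for illustration):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical OK(0)/defect(1) labels from two inspectors on the
# same 12 parts; disagreements are exactly the gray-area cases.
inspector_a = [0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
inspector_b = [0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1]

# Cohen's kappa: 1.0 is perfect agreement, 0.0 is chance level.
# A low score flags an ill-defined ground truth before any training.
kappa = cohen_kappa_score(inspector_a, inspector_b)
print(f"inter-annotator kappa: {kappa:.2f}")

# Parts the inspectors disagree on deserve adjudication before the
# model is asked to learn a "correct" answer for them.
gray_area = [i for i, (a, b) in enumerate(zip(inspector_a, inspector_b)) if a != b]
print(f"gray-area parts: {gray_area}")
```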



