Focal Point with Cognex’s Reto Wyss

RETO WYSS, COGNEX

What will deep learning mean to machine vision and automation?

For four decades, tool-based vision has dominated the scene; people developed algorithms and solutions from the pixel up to solve a particular problem. It has been a very valuable and reliable approach, but there has always been a class of problems where traditional rule-based strategies were insufficient. As a result, humans had to perform the tasks that this approach could not. It's really about the difference between quantitative measurements and qualitative analysis. Rule-based traditional measurements excel at the former, and on the flip side, humans have proven to be very good at the qualitative type. When we look at the machine vision market, there have always been these two types of applications, so I don't see deep learning as replacing rule-based machine vision, but rather filling a gap as a complementary technology. Some applications that previously demanded a great deal of effort with traditional machine vision can now be addressed much more easily using deep learning techniques.

What is most important for users to know about deep learning?

The most important thing to understand is that it's not a magic bullet. For it to work, it is critical to have good data to begin with so you can train your model properly. If you look at traditional machine vision and the effort that somebody would put into a solution, it probably breaks down to 80% of the effort spent developing that solution and 20% spent testing and verifying it. With deep learning, it's the opposite: You spend 20% on feasibility and proof of concept, showing that it has the potential to work, and then you spend 80% actually validating it. In the past in machine vision, the person doing that work was a vision engineer, but now in deep learning, the task is shifting much more toward data science.

What’s been the biggest surprise in deploying deep learning?

For a particular application where you'd deploy deep learning, you find that you run into very similar problems to those of human inspection. You are now trying to solve more qualitative types of challenges, and the correct result is not necessarily a black-or-white, cut-and-dried determination, but more of a gray area requiring judgment where the decision isn't as clear. These uncertainties arise because the people who are labeling these images do not actually agree among themselves on what is an acceptable defect and what is not, and that uncertainty migrates downstream — and people are really not prepared for that. They expect a certain, clear-cut determination because this is a machine, a computer. They neglect the fact that the actual task at hand is already pretty uncertain, sometimes pretty ill-defined. It's a lot like humans.
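The labeling disagreement Wyss describes can be made concrete with a small sketch (the data, function name, and majority-vote scheme here are illustrative assumptions, not part of any Cognex tooling): given several inspectors' accept/reject labels per image, compute the consensus label and the fraction of annotators who disagreed with it — the uncertainty that then migrates into the trained model.

```python
from collections import Counter

def label_consensus(labels_per_image):
    """For each image, return the majority label and the fraction of
    annotators who disagreed with it (hypothetical illustration)."""
    results = []
    for labels in labels_per_image:
        counts = Counter(labels)
        majority, n = counts.most_common(1)[0]
        disagreement = 1 - n / len(labels)
        results.append((majority, disagreement))
    return results

# Three inspectors label four parts as "ok" or "defect".
votes = [
    ["ok", "ok", "ok"],          # unanimous: disagreement 0.0
    ["defect", "defect", "ok"],  # one dissent: disagreement 1/3
    ["ok", "defect", "defect"],
    ["defect", "ok", "defect"],
]
print(label_consensus(votes))
```

A nonzero disagreement fraction on many images is an early warning that the inspection criterion itself is ill-defined, not that the model is failing.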

  • Reto Wyss is senior director of AI technology at Cognex Corp. In 2012, he co-founded ViDi Systems SA to commercialize its deep learning-based inspection technology. The startup was acquired by Cognex in 2017.

Vision Spectra
Jun 2019
GLOSSARY
focal point
That point on the optical axis of a lens, to which an incident bundle of parallel light rays will converge.

©2019 Photonics Media, 100 West St., Pittsfield, MA, 01201 USA, info@photonics.com