Duke Researchers Take Aim at Neural Network Bias

A team of researchers at Duke University is addressing the lack of transparency in the deep learning methods behind neural computer vision systems. A technique the team introduced aims to help researchers understand potential errors and biases in the “thinking” of deep learning algorithms. The issue, known as the “black box” problem, refers to the hidden reasoning within neural networks, which remains largely opaque, in some cases even to the networks’ designers. Previous attempts to shed light on the thought processes behind such decisions have examined the network only after the learning stage, highlighting what the computer was “looking” at rather than explaining its reasoning.

“The problem with deep learning models is they’re so complex that we don’t actually know what they’re learning,” said Zhi Chen, a Ph.D. student in computer science professor Cynthia Rudin’s lab. “They can often leverage information that we don’t want them to. Their reasoning processes can be completely wrong.”

Instead of focusing on what the machine is looking at after the fact, the researchers’ method trains the network to show its work along the way, indicating which concepts it is using to reach a decision. Despite the adjustments, the modified network retains the same level of accuracy as the original model, along with the added ability to show how its results are determined.

In the technique, one standard portion of the network is replaced with a new module that constrains a single neuron to fire in response to the standard tags and classifications the network uses to make its decision. Using a neural network trained with millions of labeled images, the researchers tested their method by feeding it images it hadn’t seen before. They were then able to read out the network’s thought process, including the tags it cycled through before making a decision.
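
To make the general idea concrete, the following is a minimal sketch, in PyTorch, of what such a drop-in module could look like. The class name, the 1×1 projection, and the per-tag readout are illustrative assumptions for this article, not the Duke team’s published implementation; the key property is that individual channels are meant to be tied to human-labeled concepts during training.

```python
import torch
import torch.nn as nn


class ConceptAlignedLayer(nn.Module):
    """Hypothetical drop-in layer: a learned 1x1 projection whose first
    `num_concepts` output channels are trained, using auxiliary concept
    labels, to fire when the matching concept appears in the input."""

    def __init__(self, channels: int, num_concepts: int):
        super().__init__()
        assert num_concepts <= channels
        self.proj = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.num_concepts = num_concepts

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Same shape in as out, so the layer can replace a standard block
        # without disturbing the rest of the network.
        return self.proj(x)

    def concept_scores(self, projected: torch.Tensor) -> torch.Tensor:
        # Average each concept channel over the image to get one score per tag.
        return projected[:, : self.num_concepts].mean(dim=(2, 3))
```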

The module can be wired into any neural network trained to decipher images, the researchers said.
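
As an illustration of that claim, the sketch below splices the hypothetical module from the previous example into an off-the-shelf torchvision ResNet-18. The insertion point, the example concept tags, and the untrained weights are assumptions made for demonstration; in practice the combined network would be fine-tuned with concept labels before its readouts carried any meaning.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

concepts = ["irregular border", "asymmetry", "dark pigment"]  # illustrative tags

model = resnet18(weights=None)  # stand-in for a classifier trained on labeled images
concept_layer = ConceptAlignedLayer(channels=512, num_concepts=len(concepts))

# Splice the module in after the final convolutional stage, whose feature
# maps have 512 channels in ResNet-18; the insertion point is a design choice.
model.layer4 = nn.Sequential(model.layer4, concept_layer)

# After fine-tuning with concept labels, read out which tags fire for an
# image the network has not seen before.
image = torch.randn(1, 3, 224, 224)  # placeholder for a new input image
with torch.no_grad():
    x = model.maxpool(model.relu(model.bn1(model.conv1(image))))
    feats = model.layer4(model.layer3(model.layer2(model.layer1(x))))
    scores = concept_layer.concept_scores(feats)

for tag, score in zip(concepts, scores[0].tolist()):
    print(f"{tag}: {score:+.3f}")
```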

The researchers connected the module to a network designed to recognize skin cancer, which had been trained with thousands of images labeled and marked by oncologists. One of the tags the network read out surprised them, they said: “irregular border.” The system had not been programmed with that tag; it developed the concept on its own from information gathered in its training images.

“Our method revealed a shortcoming in the data set,” Rudin said. “This example just illustrates why we shouldn’t put blind faith in ‘black box’ models with no clue of what goes on inside them, especially for tricky medical diagnoses.”

The research was published in Nature Machine Intelligence (www.doi.org/10.1038/s42256-020-00265-z).
