Princeton AI Tool Will Help Data Set Builders, Users Resolve Image Biases

A team of computer scientists at Princeton University’s Visual AI Lab has developed a method to detect biases in the sets of images used to train AI systems. The method relies on an open-source tool that flags both potential and clearly existing biases in image collections, including those used to train systems behind automated credit services and courtroom sentencing programs.

The tool specifically allows data set creators and users to correct issues of visual underrepresentation or stereotypical portrayals before image collections are used to train computer vision models.

Engineers develop computer vision, which allows computers to recognize people, objects, and actions, using large sets of images collected from online sources. Because these data sets are foundational to computer vision, images that reflect societal or other stereotypes and biases can severely (and detrimentally) influence the resulting models.

In one data set, REVISE uncovered a potential gender bias in images containing people (red boxes) and the musical instrument organ (blue boxes). Analyzing the distribution of inferred 3D distances between the person and the organ showed that males tended to be featured as actually playing the instrument, whereas females were often merely in the same space as the instrument. Courtesy of the Princeton researchers.
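As a rough illustration of the kind of analysis the caption describes, the sketch below compares person-to-instrument distance distributions across two groups. The data values, field names, and flagging threshold are assumptions for illustration only, not REVISE's actual implementation.

```python
import numpy as np

# Hypothetical inferred 3D person-to-instrument distances (meters),
# grouped by annotated gender label. Values are illustrative only.
distances = {
    "male":   np.array([0.4, 0.5, 0.3, 0.6, 2.1, 0.5, 0.4]),
    "female": np.array([1.8, 2.2, 0.5, 2.5, 1.9, 2.0, 2.3]),
}

# Compare the median distance per group; a large gap suggests one group
# is typically interacting with the object while the other merely
# co-occurs with it in the scene.
medians = {g: float(np.median(d)) for g, d in distances.items()}
gap = abs(medians["male"] - medians["female"])

print(medians)
if gap > 1.0:  # arbitrary threshold for this sketch
    print(f"Potential interaction bias: median distance gap {gap:.1f} m")
```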
The tool-based method complements a related advance, in which members of the Princeton Visual AI Lab published a comparison of existing methods for preventing biases in computer vision models and proposed a new, more effective approach to bias mitigation. The tool itself uses statistical methods to inspect a data set for potential biases and underrepresentation along three dimensions: object-based, geography-based, and gender-based.


The tool, known as REVISE (REvealing VIsual BiaSEs), uses existing image annotations and quantifiable, discernible measurements (such as object count, co-occurrence of objects and people, and country of origin) to reveal patterns that differ from median distributions.
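A minimal sketch of this kind of annotation-based audit follows. It counts how often each object co-occurs with each group label and flags counts that deviate sharply from the object's median; the data layout and threshold are assumptions for illustration, not REVISE's open-source code.

```python
from collections import Counter

# Hypothetical per-image annotations: the objects present and a group label.
# Real annotations are richer; this layout is an assumption for illustration.
images = [
    {"objects": ["organ", "person"], "group": "male"},
    {"objects": ["organ", "person"], "group": "male"},
    {"objects": ["flower", "person"], "group": "female"},
    {"objects": ["organ", "person"], "group": "female"},
    {"objects": ["flower", "person"], "group": "female"},
]

# Count how often each object co-occurs with each group label.
cooccur = Counter((obj, img["group"]) for img in images for obj in img["objects"])

# Flag object/group counts that deviate sharply from the object's median count.
groups = sorted({img["group"] for img in images})
for obj in sorted({o for o, _ in cooccur}):
    counts = sorted(cooccur[(obj, g)] for g in groups)
    median = counts[len(counts) // 2]
    for g in groups:
        if abs(cooccur[(obj, g)] - median) > 1:  # toy threshold
            print(f"{obj!r}/{g!r} count {cooccur[(obj, g)]} deviates from median {median}")
```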

In one tested data set, for example, REVISE showed that images containing both people and flowers differed by gender: males more often appeared with flowers in ceremonial or meeting settings, whereas females more often appeared with them in staged settings or paintings.

“Data set collection practices in computer science haven’t been scrutinized that thoroughly until recently,” study coauthor Angelina Wang said. Images are often scraped from the internet, and people are not always aware that their images have become part of a data set, she said.

Scientists from the Princeton Visual AI Lab presented the approach at the European Conference on Computer Vision this summer. It builds on earlier work that described filtering and balancing a data set’s images in a way that required more direction from the user.

The work was supported in part by the U.S. National Science Foundation (NSF), Google Cloud, and a Yang Family Innovation Research Grant awarded by the Princeton School of Engineering and Applied Science.

Vision-Spectra.com
Oct 2020
