

Princeton AI Tool Will Help Data Set Builders, Users Resolve Image Biases

A team of computer scientists at Princeton University’s Visual AI Lab has developed a method for detecting biases and skewed visual patterns in image data sets. The method relies on an open-source tool that flags both potential and clearly existing biases in images used to train AI systems, such as those that enable automated credit services and courtroom sentencing programs.

The tool specifically allows data set creators and users to correct issues of visual underrepresentation or stereotypical portrayals before image collections are used to train computer vision models.

Engineers develop computer vision, which allows computers to recognize people, objects, and actions, using large sets of images collected from online sources. Because these data sets are foundational to computer vision, images that reflect societal or other stereotypes and biases can severely and detrimentally influence the resulting models.


In one data set, REVISE uncovered a potential gender bias in images containing people (red boxes) and organs, the musical instrument (blue boxes). Analyzing the distribution of inferred 3D distances between the person and the organ showed that males tended to be depicted actually playing the instrument, whereas females were often merely in the same space as it. Courtesy of the Princeton researchers.
The new, tool-based method complements a related advance in which members of the Princeton Visual AI Lab published a comparison of existing methods for preventing biases in computer vision models, along with a proposal for a new, more effective approach to bias mitigation. The tool itself uses statistical methods to inspect a data set for potential biases and underrepresentation along three dimensions: object-based, gender-based, and geography-based.
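A geography-based check of this kind can be illustrated with a short, hypothetical sketch. The per-image "country" tag, the reference shares, and the threshold below are assumptions made for this example, not the tool's actual schema or API.

```python
# Illustrative sketch of a geography-based representation check. The per-image
# "country" tag, the reference shares, and the 0.5x threshold are assumptions
# for this example, not part of the REVISE tool itself.
from collections import Counter

# Toy data: country of origin annotated or inferred for each image.
image_countries = ["USA"] * 8 + ["UK"] * 3 + ["India"] * 1

def representation_report(countries, reference_share):
    """Compare each country's share of the data set to a reference share
    (for example, its share of world population) and report large shortfalls."""
    counts = Counter(countries)
    total = sum(counts.values())
    for country, ref in reference_share.items():
        observed = counts.get(country, 0) / total
        if observed < 0.5 * ref:  # arbitrary underrepresentation threshold
            print(f"{country}: {observed:.1%} of images vs. {ref:.1%} reference share")

# Hypothetical reference shares used only for illustration.
representation_report(image_countries, {"USA": 0.04, "India": 0.18, "UK": 0.01})
```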

The tool, known as REVISE (REvealing VIsual BiaSEs), uses existing image annotations and quantifiable, discernible measurements (such as object count, co-occurrence of objects and people, and country of origin) to reveal patterns that differ from median distributions.
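As a rough illustration of how annotation-derived co-occurrence statistics can surface such patterns, the following sketch assumes a simplified, hypothetical annotation format (an object list plus a gender label per image) and an arbitrary skew threshold; it is a minimal sketch of the general idea, not REVISE's actual implementation.

```python
# Minimal sketch of an annotation-level co-occurrence check. The annotation
# format and the ratio threshold are illustrative assumptions, not the tool's
# actual API or data schema.
from collections import Counter, defaultdict

# Hypothetical annotations: one record per image, listing labeled objects
# and the annotated gender expression of the pictured person.
annotations = [
    {"objects": ["flower", "person"], "gender": "female"},
    {"objects": ["flower", "person", "podium"], "gender": "male"},
    {"objects": ["guitar", "person"], "gender": "male"},
    # ... many more records ...
]

def cooccurrence_rates(records):
    """Fraction of each group's images in which each object appears alongside a person."""
    counts = defaultdict(Counter)   # group -> object -> image count
    totals = Counter()              # group -> image count
    for rec in records:
        totals[rec["gender"]] += 1
        for obj in set(rec["objects"]) - {"person"}:
            counts[rec["gender"]][obj] += 1
    return {g: {o: c / totals[g] for o, c in objs.items()} for g, objs in counts.items()}

def flag_skewed_objects(rates, ratio_threshold=2.0):
    """Flag objects whose co-occurrence rate in one group far exceeds the others'."""
    flagged = []
    objects = {o for objs in rates.values() for o in objs}
    for obj in objects:
        per_group = {g: rates[g].get(obj, 0.0) for g in rates}
        hi_group, hi = max(per_group.items(), key=lambda kv: kv[1])
        others = [v for g, v in per_group.items() if g != hi_group]
        baseline = max(max(others, default=0.0), 1e-9)
        if hi / baseline >= ratio_threshold:
            flagged.append((obj, hi_group, per_group))
    return flagged

for obj, group, per_group in flag_skewed_objects(cooccurrence_rates(annotations)):
    print(f"'{obj}' co-occurs disproportionately with group '{group}': {per_group}")
```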

In one tested data set, for example, REVISE showed that images including people and flowers differed between males and females: males more often appeared with flowers in ceremonial and meeting settings, whereas females more often appeared with them in staged settings or paintings.

“Data set collection practices in computer science haven’t been scrutinized that thoroughly until recently,” study coauthor Angelina Wang said. Images are often scraped from the internet, she said, and it is not always widely known which images have become part of a data set.

The approach builds on earlier work on filtering and balancing a data set’s images that required more direction from the user. Scientists from the Princeton Visual AI Lab presented the new approach at the European Conference on Computer Vision this summer.

The work was supported in part by the U.S. National Science Foundation (NSF), Google Cloud, and a Yang Family Innovation Research Grant awarded by the Princeton School of Engineering and Applied Science.
