OSU Gets $6.5M DARPA Grant for AI Research

Eight computer science professors in Oregon State University's College of Engineering have received a $6.5 million grant from the Defense Advanced Research Projects Agency (DARPA), part of the U.S. Department of Defense, to make artificial intelligence-based systems such as autonomous vehicles and robots more trustworthy.

The success of the deep neural network branch of artificial intelligence has enabled significant advances in autonomous systems that can perceive, learn, decide and act on their own. Deep-learning software aims to mimic the activity in layers of neurons in the neocortex, the part of the brain where thinking occurs. The problem is that these neural networks function as a black box: instead of a human explicitly coding the system's behavior with traditional programming, in deep learning the program learns its behavior on its own from many examples.
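As a rough illustration of that contrast (a minimal sketch in Python, not code from the OSU project, with a made-up obstacle-detection rule as the example task): an explicitly programmed rule is readable and auditable, while a small network trained on labeled examples ends up encoding the same behavior in weights that a human cannot easily inspect.

```python
# Illustrative sketch only: explicit rule vs. rule learned from examples.
import numpy as np

# Traditional programming: the decision logic is written down by a human.
def is_obstacle_explicit(distance_m, speed_mps):
    return distance_m < 2.0 and speed_mps > 0.5  # readable, auditable rule

# Deep learning: the "rule" ends up in learned weights that are hard to interpret.
rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(200, 2))                    # [distance, speed] examples
y = ((X[:, 0] < 2.0) & (X[:, 1] > 0.5)).astype(float)   # labels from the same rule

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)          # tiny one-hidden-layer network
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))         # sigmoid output probability

for _ in range(5000):                                    # plain full-batch gradient descent
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    grad_out = (p - y[:, None]) / len(X)                 # cross-entropy gradient at the output
    grad_h = grad_out @ W2.T * (1 - h ** 2)              # backpropagate through tanh
    W2 -= 0.5 * (h.T @ grad_out); b2 -= 0.5 * grad_out.sum(0)
    W1 -= 0.5 * (X.T @ grad_h);   b1 -= 0.5 * grad_h.sum(0)

# The network now approximates the explicit rule, but its weights are a black box.
print(forward(np.array([[1.0, 1.0]])))   # obstacle-like input: probability should be high
print(forward(np.array([[4.0, 1.0]])))   # distant object: probability should be low
```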

Potential dangers arise from depending on a system that not even its developers fully understand. The four-year grant from DARPA will support the development of a paradigm for looking inside that black box by getting the program to explain to humans how its decisions were reached.

"Ultimately, we want these explanations to be very natural – translating these deep network decisions into sentences and visualizations," said Alan Fern, principal investigator for the grant and associate director of the College of Engineering's recently established Collaborative Robotics and Intelligent Systems Institute.

Developing a system that communicates well with humans requires expertise in a number of research fields. In addition to researchers in artificial intelligence and machine learning, the team includes experts in computer vision, human-computer interaction, natural language processing and programming languages.

"Nobody is going to use these emerging technologies for critical applications until we are able to build some level of trust, and having an explanation capability is one important way of building trust," he said.

The researchers from Oregon State were selected by DARPA for funding under the Explainable Artificial Intelligence program. Other major universities chosen include Carnegie Mellon, Georgia Tech, the Massachusetts Institute of Technology, Stanford, the University of Texas and the University of California, Berkeley.
