US University Consortium Receives $10M Grant for Machine Learning Security

A team of U.S. computer scientists is receiving a $10 million grant from the National Science Foundation (NSF) to make machine learning more secure.

The grant establishes the Center for Trustworthy Machine Learning, a consortium of U.S. universities. Researchers will work together toward two goals: understanding the risks inherent to machine learning, and developing the tools, metrics, and methods to manage and mitigate those risks. The effort will be led by researchers at Pennsylvania State University; Stanford University; the University of California, Berkeley; the University of California, San Diego; the University of Virginia; and the University of Wisconsin-Madison.

The science and defensive techniques emerging from the center are intended to provide the basis for building more trustworthy and secure systems, and to foster a long-term research community in this domain.

“This research is important because machine learning is becoming more pervasive in our daily lives, powering technologies we interact with, including services like e-commerce and internet searches, as well as devices such as internet-connected smart speakers,” said Kamalika Chaudhuri, a computer science professor who is leading the UC San Diego portion of the research.

The award is part of NSF’s Secure and Trustworthy Cyberspace (SaTC) program, which includes a $78.2 million portfolio of more than 225 new projects in 32 states, spanning a broad range of research and education topics including artificial intelligence, cryptography, network security, privacy, and usability.

Researchers will explore methods to defend trained models against adversarial inputs, emphasizing measurements of how robust a defense is and of the limits and costs of attacks. They will also develop new training methods that resist manipulation, while investigating the broader security of sophisticated machine learning algorithms, including potential abuses of machine learning models such as fake content generators.
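To illustrate what an adversarial input is: the fast gradient sign method (FGSM) is one well-known way to craft one, nudging an input in the direction that most increases a model's loss. The NumPy sketch below applies FGSM to a toy logistic-regression classifier; the weights, data, and epsilon value are hypothetical stand-ins for illustration, not anything from the center's research.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fgsm_perturb(x, y, w, b, epsilon=0.1):
        """Return an adversarially perturbed copy of input x.

        For logistic regression, the gradient of the cross-entropy
        loss with respect to x is (sigmoid(w.x + b) - y) * w.
        FGSM steps epsilon in the sign of that gradient, the
        direction that most increases the loss.
        """
        grad_x = (sigmoid(np.dot(w, x) + b) - y) * w
        return x + epsilon * np.sign(grad_x)

    # Hypothetical "trained" weights and one input labeled y = 1.
    rng = np.random.default_rng(0)
    w, b = rng.normal(size=8), 0.0
    x, y = rng.normal(size=8), 1.0

    x_adv = fgsm_perturb(x, y, w, b, epsilon=0.25)
    print("clean score:      ", sigmoid(np.dot(w, x) + b))
    print("adversarial score:", sigmoid(np.dot(w, x_adv) + b))

The adversarial score drops relative to the clean score even though the perturbation to x is small, which is the behavior the center's defensive techniques aim to measure and prevent.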
