
US University Consortium Receives $10M Grant for Machine Learning Security

A team of U.S. computer scientists is receiving a $10 million grant from the National Science Foundation (NSF) to make machine learning more secure.

The grant establishes the Center for Trustworthy Machine Learning, a consortium of U.S. universities. Researchers will work together toward two goals: understanding the risks inherent to machine learning, and developing the tools, metrics, and methods to manage and mitigate those risks. The effort will be led by researchers at Pennsylvania State University; Stanford University; the University of California, Berkeley; the University of California, San Diego; the University of Virginia; and the University of Wisconsin-Madison.

The science and defensive techniques emerging from the center will provide the basis for building more trustworthy and secure systems in the future, as well as foster a long-term research community within this domain of technology.

“This research is important because machine learning is becoming more pervasive in our daily lives, powering technologies we interact with, including services like e-commerce and internet searches, as well as devices such as internet-connected smart speakers,” said Kamalika Chaudhuri, a computer science professor who is leading the UC San Diego portion of the research.

The award is part of NSF’s Secure and Trustworthy Cyberspace (SaTC) program, which includes a $78.2 million portfolio of more than 225 new projects in 32 states, spanning a broad range of research and education topics including artificial intelligence, cryptography, network security, privacy, and usability.

Researchers will explore methods to defend a trained model against adversarial inputs. To do this, they will emphasize developing measurements of how robust defenses are, as well as understanding limits and costs of attacks. They will also develop new training methods that are immune to manipulation while investigating the general security of sophisticated machine learning algorithms, including potential abuses of machine learning models such as fake content generators.
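The adversarial inputs mentioned above can be illustrated with a toy sketch. The example below (not from the center's research; the classifier, weights, and epsilon are invented for illustration) shows the core idea behind gradient-based attacks such as FGSM: a small, bounded perturbation in the direction that most changes a model's score can flip its prediction.

```python
import numpy as np

# Hypothetical linear classifier: predicts class 1 when w @ x + b > 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# A benign input the model classifies as class 1.
x = np.array([0.5, -0.2, 0.3])

# FGSM-style perturbation: step each feature against the sign of the
# gradient of the score with respect to the input (here, simply w).
eps = 0.6
x_adv = x - eps * np.sign(w)

# A bounded change of at most eps per feature flips the prediction.
print(predict(x), predict(x_adv))  # → 1 0
```

Measuring how large eps must be to flip predictions, and designing training procedures that keep that threshold high, is one concrete form the center's robustness research can take.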
Nov 2018
Business, National Science Foundation, grant, machine learning, Center for Trustworthy Machine Learning, computer science, consortium, Pennsylvania State University, Stanford University, University of California Berkeley, University of California San Diego, University of Virginia, University of Wisconsin-Madison, Americas, education, light speed

©2021 Photonics Media, 100 West St., Pittsfield, MA, 01201 USA, [email protected]