ML Safety
The emerging research community focused on reducing long-term risks from ML systems.
What is ML Safety?
ML systems are rapidly increasing in size, acquiring new capabilities, and increasingly being deployed in high-stakes settings. As with other powerful technologies, the safety of ML systems should be a leading research priority. This involves ensuring systems can withstand hazards (Robustness), identifying hazards (Monitoring), reducing inherent ML system hazards (Alignment), and reducing systemic hazards (Systemic Safety). Example problems and subtopics in these categories are listed below, followed by a small illustrative code sketch:
  • Robustness: Adversaries, Long Tails
  • Monitoring: Anomalies, Interpretable Uncertainty, Transparency, Trojans, Emergent Behavior
  • Alignment: Honesty, Power Aversion, Value Learning, Machine Ethics
  • Systemic Safety: ML for Improved Epistemics, ML for Improved Cyberdefense, Cooperative AI
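To make one of these subtopics concrete, here is a minimal, illustrative sketch of one well-known anomaly-scoring baseline from the Monitoring category: maximum softmax probability (MSP), which flags inputs on which a classifier is unusually unconfident. The classifier logits below are random placeholders rather than outputs of a trained model, so the snippet only demonstrates the scoring idea, not a real detector.

    # Illustrative only: maximum softmax probability (MSP) anomaly scoring.
    # The "logits" here are simulated stand-ins for a classifier's outputs.
    import numpy as np

    def softmax(logits: np.ndarray) -> np.ndarray:
        """Row-wise softmax over class logits."""
        z = logits - logits.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def msp_anomaly_score(logits: np.ndarray) -> np.ndarray:
        """Higher score = less confident prediction = more anomalous."""
        return 1.0 - softmax(logits).max(axis=1)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Confident, in-distribution-like logits: one class clearly dominates.
        in_dist = rng.normal(size=(5, 10)) + 6.0 * np.eye(10)[:5]
        # Unfamiliar inputs: no class dominates, so confidence is low.
        ood = rng.normal(size=(5, 10))
        print("in-distribution scores:", msp_anomaly_score(in_dist).round(3))
        print("anomalous-input scores:", msp_anomaly_score(ood).round(3))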
Get Connected
Stay in the loop and exchange thoughts and news related to ML safety. Tag @ml_safety on Twitter if you want to disseminate your work to the rest of the community.
  • ML Safety (@ml_safety): research highlights and announcements
  • ML Safety Daily (@topofmlsafety): ML safety papers as they are released