ML Safety

The ML research community focused on reducing risks from AI systems.

What is ML Safety?

ML systems are rapidly increasing in size, are acquiring new capabilities, and are increasingly deployed in high-stakes settings. As with other powerful technologies, the safety of ML systems should be a leading research priority. This involves ensuring systems can withstand hazards (Robustness), identifying hazards (Monitoring), reducing inherent ML system hazards (Alignment), and reducing systemic hazards (Systemic Safety). Example problems and subtopics in these categories are listed below:

Robustness

Adversarial Robustness, Long-Tail Robustness
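
To make this concrete: adversarial robustness asks whether a model's predictions survive small, worst-case input perturbations. Below is a minimal sketch of the fast gradient sign method (FGSM), a standard attack used to stress-test classifiers, assuming a PyTorch image classifier with pixel values in [0, 1]; the toy model, data, and epsilon value are illustrative, not part of this page.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float) -> torch.Tensor:
    """Fast Gradient Sign Method: nudge each pixel by +/- epsilon
    in the direction that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One signed gradient step, then clamp back to the valid pixel range.
    x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()

# Toy usage (illustrative): a linear "classifier" on flattened 8x8 inputs.
model = nn.Sequential(nn.Flatten(), nn.Linear(64, 10))
x = torch.rand(4, 1, 8, 8)         # batch of 4 fake images in [0, 1]
y = torch.randint(0, 10, (4,))     # fake labels
x_adv = fgsm_attack(model, x, y, epsilon=8 / 255)
```

Robustness methods are then typically judged by how much accuracy they retain on perturbed inputs like x_adv, rather than on clean data alone.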

Monitoring

Anomaly Detection, Interpretable Uncertainty, Transparency, Trojans, Detecting Emergent Behavior
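
As one concrete example, anomaly detection often starts from the maximum softmax probability baseline (Hendrycks & Gimpel, 2017): a classifier's unusually low confidence on an input is weak evidence that the input is out of distribution. Here is a minimal PyTorch sketch; the toy model and the 0.5 threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

@torch.no_grad()
def msp_score(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Maximum softmax probability: the model's confidence in its
    top predicted class, used as an in-distribution score."""
    probs = F.softmax(model(x), dim=-1)
    return probs.max(dim=-1).values

# Toy usage (illustrative): flag low-confidence inputs as anomalies.
model = nn.Sequential(nn.Flatten(), nn.Linear(64, 10))
x = torch.rand(4, 1, 8, 8)
anomalous = msp_score(model, x) < 0.5
```

In practice the threshold is chosen on held-out in-distribution data, so that only a small, acceptable fraction of normal inputs is flagged.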

Alignment

Honesty, Power Aversion, Value Learning, Machine Ethics

Systemic Safety

ML for Improved Epistemics, ML for Improved Cyberdefense, Cooperative AI

ML Safety Projects

We organize AI/ML safety resources and education for researchers and non-technical audiences.

Get Connected

Stay in the loop and exchange thoughts and news related to ML safety. Join our Slack or follow one of the accounts below.