SafeBench competition background art

SafeBench

$250,000 in prizes for ML Safety benchmarks

Submit Benchmark

Receive updates about SafeBench:

Thank you! Your submission has been received!
Something went wrong while submitting the form, please try again.

1. View example ideas

Benchmarks should relate to Robustness, Monitoring, Alignment, or Safety Applications. You have the option to develop your benchmark based on one of the example directions we've provided.

2. Develop a benchmark

Understand what we're looking for in a benchmark—an empirical measure that assesses the safety of an AI model, and not its capabilities.

SafeBench Submission timeline

3. Submission Timeline

Submit your benchmark. By default, we will require the code and dataset to be publicly available on Github.

Mar 25, 2024: Competition Launch

The competition begins - we will begin receiving submissions from this date. This includes benchmarks you started working on prior to this date, as long as the paper was published after this date.

Feb 25, 2025: Submission Deadline

Submit your ML safety benchmark by this date.

Apr 25, 2025: Winners Announced

The judges will announce the winners, along with whether they win a $50k or $20k prize.

4. Winners Announced

We will announce the winners, along with the amount of prize money that each winner will receive. There will be three prizes worth $50,000 and five prizes worth $20,000.

SafeBench clock

SafeBench Timeline

Below, you can find relevant dates for the competition.

Mar 25, 2024: Competition Launch

The competition begins - we will begin receiving submissions from this date. This includes benchmarks you started working on prior to this date, as long as the paper was published after this date.

Feb 25, 2024: Submission Deadline

Submit your ML safety benchmark by this date.

Mar 25, 2024: Winners Announced

The judges will announce the winners, along with whether they win a $50k or $20k prize.

Meet the SafeBench Judges

Our judges have extensive experience in AI safety research, across academia, not-for-profit, and industry.

Zico Kolter

Associate Professor, Carnegie Mellon

Zico is an Associate Professor in the Computer Science Department at Carnegie Mellon University. In addition to his full-time role at CMU, he also serves as Chief Scientist of AI Research for the Bosch Center for AI (BCAI), working in the Pittsburgh Office.

Learn more

Mark Greaves

Executive Director, AI2050

Mark Greaves is the Executive Director, AI2050. An initiative of Schmidt Sciences, AI2050 supports exceptional people working on key opportunities and hard problems that are critical to get right for society to benefit from AI.

Learn more

Bo Li

Associate Professor, University of Chicago

Bo Li is an Associate Professor in the Computer Science Department at the University of Chicago and serves on the advisory board of the Center for Artificial Intelligence Innovation (CAII) at Illinois, as well as being a member of the Information Trust Institute (ITI).

Learn more

Dan Hendrycks

Director, Center for AI Safety

Dan is the Director of the Center for AI Safety. He helped contribute to the GELU activation function, created the MMLU benchmark, and many others. He received his PhD from UC Berkeley, where he was advised by Dawn Song and Jacob Steinhardt.

Learn more
This project is supported by Schmidt Sciences, and is established by the Center for AI Safety. Any questions can be directed to safebench@mlsafety.org.