ML Safety Scholars at Yale Fall 2022

An introduction to AI Safety topics for students with a background in Deep Learning

Apply

Yale Application: apply by Sep 14th

Overview

As ML systems become more capable and widely deployed, concerns about their safety are growing; DeepMind and OpenAI, for example, both have dedicated safety teams.

Some of these concerns are near-term: how do we prevent driverless cars from misidentifying a stop sign in a blizzard? Others are more long-term: if artificial general intelligence (AGI) systems are built, how do we make sure they pursue safe goals and benefit humanity? This course serves as an introduction to the body of technical research relevant to both, but it emphasizes long-term, high-consequence risks. It also explores the threat models for these risks: could future AI systems pose an existential threat?

As with other powerful technologies, safety for ML should be a leading research priority. In that spirit, we want to bring you to the frontiers of this growing field. The course materials are created by Dan Hendrycks, a UC Berkeley ML PhD and the director of the Center for AI Safety.

Time Commitment

The program will last 8 weeks. The start and end dates vary by university and are included in the application form. The total time commitment per week is approximately 5 hours, which includes watching lectures, completing written assignments, and attending discussions.

Eligibility

The material is designed for students with strong technical backgrounds. The prerequisites are:

  • Deep learning (courses and/or prior research experience preferred)
  • At least one of linear algebra or introductory statistics (e.g., AP Statistics)
  • Multivariate differential calculus

If you don't meet the prerequisites, or aren't sure whether you do, please still apply. We can be flexible; just be aware that you will have to work harder without the assumed technical background.

Syllabus

Course topics by week for the intro track:

  • Deep Learning review
  • Hazard Analysis
  • Introduction to the AI existential risk discussion
  • Robustness
  • Monitoring (part 1)
  • Monitoring (part 2)
  • Alignment
  • Systemic Safety

The full syllabus can be found here. There is also an advanced track, covering similar topics in more depth, for students who already have exposure to AI Safety.

Apply to be a Facilitator

We are looking for facilitators with prior knowledge of AI Safety and a strong technical background to help run the course. Facilitators are responsible for leading discussions and grading homework, and will receive a stipend of $30 per hour. Apply to be a facilitator here.