
AI Safety Fundamentals

A 12-week program that introduces you to the fundamentals of AI safety.

A collaboration between Tutke and AI Safety North

What?

The AI Safety Fundamentals program is designed to make the space of AI alignment and AI governance more accessible. The field is new, with no textbooks or widespread university courses yet, and we aim to fill this gap. We offer two tracks: technical and governance.

 

The program consists of 8 weeks of guided learning followed by 4 weeks of project work. Participants will also be invited to talks by Siméon Campos (Research Entrepreneur at SaferAI) and Esben Kran (CEO of Apart Research).

 

The course is held simultaneously in 10 groups across the Nordic and Baltic countries. Participants who perform well during the course will get the opportunity to join an AI safety research retreat afterwards.


Questions? Send an email to info@tutke.org.

When?

Applications close: Sunday, September 10

Program duration: Weekly sessions from September 19 to December 18

Retreat: January 3–7

Where?

There will be weekly group sessions in Helsinki, in Otaniemi (Espoo), and online.

Content

Each week consists of curated readings and an exercise, which together take about 4 hours. The weekly discussion session takes 1.5 hours.


The technical track will use an adapted version of the curriculum designed by Richard Ngo, restructured in the form of a textbook.


The governance track will use the curriculum designed by BlueDot Impact, which you can see here.

Frequently Asked Questions

  • What is AI safety?
    We believe that the development of advanced artificial intelligence (AI) would be one of the most transformative achievements of this century. However, the current landscape of machine learning (ML) systems presents intricate challenges that may be magnified as these systems progress to higher levels of capability. AI safety is about ensuring AI systems are ethical, reliable, and aligned with human values. Resolving these problems ahead of time will require a collaborative effort involving researchers, policymakers, and other stakeholders in the decades to come.
  • What background knowledge do I need?
    Technical track: You will need basic knowledge of linear algebra and statistics. No background in machine learning is required, although it is highly recommended. Governance track: No special background knowledge is needed.
  • Should I apply for the technical or governance track?
    We recommend choosing the track that seems more suited to your interests and background. The technical track goes deeper into the technical problem of AI alignment and proposed solutions to it, while the governance track explores how risks from advanced AI can be tackled through standards and regulation. If you have a computer science, math, or other technical background, we usually recommend applying for the technical track. If you have a humanities or less technical background, or you are a decision-maker concerned about these issues, we recommend applying for the governance track.
  • Can I sign up for both programs?
    We have limited capacity, so we will only admit each participant to one track at a time. However, you are free to read the other track's resources on your own, or apply for it at a later time.
