AI Safety Events Tracker

Discover and engage with AI safety events, online and around the world – workshops, conferences, reading groups, and more.

Upcoming events

Open calls

What is AI safety?

AI safety involves steering the development of AI systems to prevent potential catastrophes. Researchers address risks such as misuse, accidents, rushed deployment, and loss of control that could lead to devastating outcomes. By developing technical solutions, policies, and best practices, AI safety aims to maximize the benefits of AI while avoiding global catastrophic and existential risks. The goal is the safe and beneficial development of artificial intelligence. For further information, see the Wikipedia pages on AI safety and existential risk from AI, and the Center for AI Safety's AI risk explainer.

Partners

Alignment Ecosystem Development
Horizon Omega

© Horizon Omega, released under CC-BY.