AI Safety Events

Discover and engage with events in the AI safety space – both online and in-person.

Upcoming events

Open calls

What is AI safety?

The AI safety field is dedicated to reducing catastrophic risks posed by the development of AI systems. Researchers work to address risks including misuse, accidents, rushed deployment, and loss of control. By developing technical solutions, policies, and best practices, AI safety aims to maximize the benefits of AI while avoiding disaster, including human extinction. The goal is the safe and beneficial development of artificial intelligence.

For further information: AI safety Wikipedia page, existential risk from AI Wikipedia page, Center for AI Safety's AI risk explainer.


Alignment Ecosystem Development
Horizon Omega

© Horizon Omega, released under CC-BY.