AI Safety

MAISU 2026 - Minimal AI Safety Unconference (April 24-27, online)

A free, participant-driven online event where anyone can host sessions on preventing AI catastrophe.

Deep Dive

The AI Safety Camp, organized by Robert Kralisch and Remmelt Ellen, is launching MAISU 2026—a Minimal AI Safety Unconference running online from April 24-27. This free event adopts an 'unconference' format, meaning there is no fixed agenda set by organizers. Instead, the schedule is entirely open for any registered participant to populate. Anyone interested in AI safety can propose and host a session on a relevant topic, whether it's a formal talk, a workshop, a debate on a 'hot take,' or another interactive format. The core idea is to lower barriers to participation and encourage a wide range of voices and perspectives on preventing potential AI catastrophes.

Key programming includes dedicated lightning talk sessions where teams from the 11th edition of the AI Safety Camp will present their completed projects. These projects span critical areas like AI governance, advocacy, mechanistic interpretability (understanding how models work internally), and agent foundations (studying AI systems that can take actions). Most sessions are concentrated on April 25-26, with an opening session on April 24 and potential overflow into April 27. Participation is flexible: attendees can join as many or as few sessions as they like. Coordination and discussion will happen via the AI Alignment Slack, and hosts are responsible for providing their own video call links, though an AISC Zoom room will be available.

Key Points
  • Free, online 'unconference' from April 24-27 with an open schedule that any registered participant can edit to host sessions.
  • Features lightning talks from the 11th AI Safety Camp teams on projects in governance, interpretability, and agent foundations.
  • No expertise required to attend; the event aims to bring together participants from diverse backgrounds to collaborate on AI risk mitigation strategies.

Why It Matters

Democratizes AI safety discourse, enabling grassroots collaboration and knowledge-sharing to address existential technological risks.