Third Symposium on AIT & ML: AI Safety Applications
AIXI and algorithmic information theory meet safety at Oxford this July.
Cole Wyeth and Aram Ebtekar are organizing the Third Symposium on Machine Learning and Algorithmic Information Theory, taking place July 27-29 at Oxford. This iteration specifically emphasizes applications of algorithmic information theory (AIT) to AI safety, building on prior work by MIRI and agent foundations researchers that uses AIXI to model risks from artificial superintelligence (ASI).
The symposium invites academics working in AIT, machine learning, and related fields, with particular interest in robust RL/ML, imprecise probability, and Infra-Bayesianism. Prospective attendees can apply via the interest form, and researchers may also apply to give talks. The event aims to bridge theoretical frameworks such as AIXI with practical safety mitigations, a direction highlighted by researchers such as Michael K. Cohen.
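For readers unfamiliar with the framework: AIXI is Hutter's idealized reinforcement-learning agent, which grounds the symposium's connection between AIT and agent models of ASI. A standard way to state it (following Hutter's formulation; notation sketched here for orientation, not taken from the symposium materials) is as expectimax planning over a Solomonoff-style mixture of all programs consistent with the agent's history:

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \bigl[ r_k + \cdots + r_m \bigr]
       \sum_{q \,:\, U(q, a_{1:m}) = o_{1:m} r_{1:m}} 2^{-\ell(q)}
```

Here $U$ is a universal Turing machine, $q$ ranges over environment programs of length $\ell(q)$, $a$, $o$, $r$ are actions, observations, and rewards, and $m$ is the planning horizon. The $2^{-\ell(q)}$ weighting is the algorithmic-information-theoretic ingredient: simpler environments dominate the mixture, which is what makes AIXI a natural meeting point for AIT and models of superintelligent agency.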
- Symposium runs July 27-29 at Oxford, focused on AI safety applications of algorithmic information theory (AIT).
- Key topics include AIXI models of ASI risk, robust RL/ML, imprecise probability, and Infra-Bayesianism.
- Researchers can apply to attend or give talks via the linked interest form; prior work by MIRI and Michael K. Cohen informs the agenda.
Why It Matters
Connects idealized agent models like AIXI with practical safety mitigations, advancing mathematically rigorous approaches to superintelligence risk.