"What Exactly Would An International AI Treaty Say?" Is a Bad Objection
Davidmanheim compares AI treaty negotiations to nuclear arms control, arguing that shared goals matter more than precise technical rules.
In a viral LessWrong post, AI safety researcher Davidmanheim tackles the common objection that international AI treaty negotiations are stalled because 'no one knows what the technical rules would be.' He argues this misunderstands how successful international treaties actually come about, drawing on two contrasting historical analogies.
First, he examines the failed Pandemic Treaty as a negative example, in which vague goals and a lack of shared vision produced a toothless 'mishmash' of commitments. The critical failure was an absence of clear, agreed-upon benefits for all signatories. In contrast, he points to the suite of nuclear weapons treaties (NPT, SALT, New START) as a positive model. Here, the shared goal of preventing nuclear catastrophe served as a 'north star,' enabling successive agreements on testing, proliferation, and arms control despite immense technical and political complexity.
Davidmanheim's core argument is that for AI, the foundational questions are already answered: experts broadly agree that unaligned artificial superintelligence (ASI) poses a catastrophic global risk, and that verification and enforcement mechanisms are technically feasible. The real barrier, therefore, is not drafting perfect technical specifications upfront but mustering the political will to commit to preventing the creation of 'unsafe ASI.' Guided by that shared goal, the negotiation process itself would iteratively work out the necessary technical and legal contours, just as it did for nuclear arms control.
- The post refutes the common objection that AI treaty talks are stuck on undefined technical rules, arguing shared goals matter more.
- Uses the failed Pandemic Treaty (vague, no shared vision) vs. successful nuclear treaties (clear goal, iterative agreements) as contrasting models.
- Asserts experts already agree on the catastrophic risk of unsafe ASI and the feasibility of verification, making political will the main hurdle.
Why It Matters
Shifts the debate from impossible technical precision to achievable political negotiation, providing a roadmap for global AI governance.