AI Safety

Treaties, Regulations, and Research can be Complements

A new essay challenges the false dichotomy between global treaties and domestic AI regulation.

Deep Dive

Researcher Davidmanheim published a detailed argument on LessWrong challenging the polarized discourse around AI governance. He argues that framing international treaties and domestic regulation as opposing solutions creates unnecessary conflict and weakens both. The core of his argument is that different AI risks operate at different levels: prosaic harms like fraud or discrimination are best handled by national laws, while systemic risks from international arms races require treaty-level coordination. He uses David Krueger's recent statement, "Stopping AI is easier than regulating it," as a case study of this oversimplification, agreeing with the sentiment but critiquing the framing.

Davidmanheim further argues that both regulatory approaches are strengthened by complementary technical and policy research. For regulation to work, it needs operationalized risk definitions, measurable standards, and auditable procedures, all enabled by research. Similarly, effective treaties require shared definitions, credible verification methods, and oversight mechanisms, which also draw on research and can be supported by domestic regulatory frameworks. He draws an analogy to industries like aviation, where safety is managed through a combination of national bodies and international standards. On this view, the AI industry's stated desire for rules should be met with a multi-layered approach rather than a single solution.

Key Points
  • Challenges the false dichotomy between international AI treaties and domestic regulation, arguing they address different risk classes.
  • Posits that prosaic harms (fraud, bias) need national laws, while systemic race dynamics require treaty-level coordination.
  • Argues both regulatory paths are strengthened by complementary technical research in areas like evals and interpretability.

Why It Matters

Provides a pragmatic framework for policymakers to build layered, effective AI governance instead of pursuing single, conflicting solutions.