Stopping AI Is Easier Than Regulating It
David Krueger's viral essay argues that an international treaty to dismantle the AI compute supply chain is the most effective path to reducing existential risk.
In a provocative essay going viral on LessWrong, AI researcher David Krueger makes the counterintuitive argument that completely halting advanced AI development is a more practical and effective risk mitigation strategy than attempting to regulate how AI is built and used. Krueger, formerly known as 'capybaralet,' specifically proposes an international treaty aimed at 'Systematically Dismantling the AI Compute Supply Chain.' He sets aside political feasibility to argue that, on technical and incentive grounds, a total stop is the clearest way to reduce existential risks to an acceptable level.
The central challenge, according to Krueger, is international competition—primarily between the US and China—which drives a dangerous race. He asserts that any regulatory framework permitting continued development is doomed by insurmountable monitoring problems: how can anyone verify that a significant fraction of the world's computing power isn't being used for unauthorized, risky AI projects? Tracking every advanced chip, preventing secret factories, and managing incidents like 'missing' shipments would create constant tension and a high risk of enforcement failures that could escalate into conflict.
Krueger contrasts this with the relative simplicity of a stop. If the goal of safety testing is to prevent the deployment of unsafe systems, he questions why proponents are so confident these tests will be passed. He concludes that governance models attempting to control a technology that could grant a single entity vast power are inherently unstable. The essay challenges the mainstream AI policy discourse, suggesting that the most direct path to safety is to turn off the tap at the source—the hardware supply chain—rather than trying to manage the flow.
- Proposes an international treaty to 'Systematically Dismantle the AI Compute Supply Chain' as the most effective safety measure.
- Argues monitoring compliance for regulated development is nearly impossible due to untrackable chips and secret factories.
- Identifies US-China competition as the key barrier to cooperation, making a clean stop the only stable solution.
Why It Matters
Challenges core assumptions in AI policy, advocating for a hardware-focused, prohibitionist approach over complex governance of software and use.