International Law Cannot Prevent Extinction Either
Sausage Machine uses Ukraine and North Korea to show treaties fail.
In a detailed rebuttal to Eliezer Yudkowsky's 'Only Law Can Prevent Extinction,' Sausage Vector Machine argues that international law is fundamentally ineffective for preventing AI-driven extinction. The post points to historical failures: North Korea signed the NPT, then withdrew in 2003 and developed nuclear weapons anyway, and the Budapest Memorandum – under which Ukraine gave up its nuclear weapons in exchange for sovereignty guarantees from Russia, the US, and the UK – was violated with impunity when Russia annexed Crimea in 2014 and launched a full-scale invasion in 2022. No enforcement mechanism exists. The author also argues that the nuclear non-proliferation regime worked only because of mutually assured destruction, where both sides expected lose-lose outcomes. By contrast, the AI race is widely perceived as win-lose: the first to build superintelligence could dominate globally, creating immense pressure to defect from any treaty.
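The payoff asymmetry described above can be sketched as a pair of toy two-player games. The numbers below are illustrative assumptions, not figures from the post: under MAD, breaking the treaty triggers mutual destruction, so defection never pays; in a winner-take-all race, racing yields a better outcome no matter what the rival does, so defection dominates.

```python
# Toy 2x2 games sketching the post's payoff argument.
# Payoffs are (row player, column player); all numbers are illustrative.

# Nuclear MAD: any defection triggers retaliation that destroys both
# sides, so no player gains by breaking the treaty.
mad = {
    ("cooperate", "cooperate"): (0, 0),
    ("cooperate", "defect"):    (-10, -10),
    ("defect",    "cooperate"): (-10, -10),
    ("defect",    "defect"):    (-10, -10),
}

# AI race as perceived win-lose: racing while the rival abstains
# yields decisive dominance, so racing strictly dominates.
race = {
    ("cooperate", "cooperate"): (0, 0),
    ("cooperate", "defect"):    (-5, 10),   # rival reaches ASI first
    ("defect",    "cooperate"): (10, -5),   # you reach ASI first
    ("defect",    "defect"):    (2, 2),     # risky race, still beats losing
}

def defection_dominates(game):
    """True if 'defect' is at least as good as 'cooperate' for the
    row player against every rival strategy, and strictly better
    against at least one (i.e. defection weakly dominates)."""
    gains = [game[("defect", c)][0] - game[("cooperate", c)][0]
             for c in ("cooperate", "defect")]
    return all(g >= 0 for g in gains) and any(g > 0 for g in gains)

print(defection_dominates(mad))   # False: no incentive to break the treaty
print(defection_dominates(race))  # True: racing pays regardless of the rival
```

The sketch captures the post's structural point: the same treaty text is stable under one payoff matrix and self-undermining under the other, independent of any enforcement mechanism.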
Beyond structural flaws, the essay stresses the lack of scientific consensus on AI extinction risk. While 'a few hundred computer scientists and Nobel laureates' have sounded alarms, many other experts disagree, leaving the public debate unsettled. Even if consensus emerged, the climate change example shows that countries often ignore agreed-upon risks when the incentives to defect are strong. Yudkowsky's proposed treaty presupposes universal terror of ASI, but Sausage Machine believes that without a shared belief in existential danger – and with the payoff structure appearing asymmetric – no treaty can bind powerful actors. The post concludes that time spent on international law would be better used elsewhere, because enforcement requires a level of global cooperation that simply does not exist.
- International law lacks enforcement: the Budapest Memorandum (Ukraine) and North Korea's NPT withdrawal show treaties fail when interests diverge.
- AI race is perceived as win-lose (first to ASI wins decisively), unlike nuclear MAD's lose-lose structure – increasing defection pressure.
- No scientific consensus on AI extinction risk yet; even climate consensus didn't motivate substantial action, so treaties are unlikely to succeed.
Why It Matters
Policymakers relying on international treaties to control AI may be wasting effort if enforcement and consensus remain absent.