AI Doom Markets
Prediction markets now let traders bet on AI apocalypse scenarios, with 'gradual resource monopolization' as the top pick.
Forecaster Ihor Kendiukhov has launched a provocative series of 'AI Doom Markets' on the prediction platform Manifold, creating a speculative arena for betting on humanity's potential demise at the hands of artificial intelligence. The most active market asks, 'If AI kills everyone, how will it do it?' Traders currently favor 'Gradual resource monopolization / slow squeeze' as the leading method, followed by the more cinematic 'Engineered pandemic.' Other markets probe which organization might be responsible (Kendiukhov personally suspects Anthropic) and the nature of a hypothetical exterminating AI.
These markets extend to forecasting the opinions of AI safety figurehead Eliezer Yudkowsky and the potential for international moratoriums or even military interventions to slow AI development by 2030 or 2035. A key caveat, noted by commenters, is that many markets are illiquid and, since a 'YES' resolution would leave no one to collect, they function more as 'mana-weighted polls' (mana being Manifold's play-money currency) of the AI risk community's anxieties than true financial instruments. The project highlights how prediction markets are being used to quantify and debate long-term, high-stakes technological risks in a tangible, if morbid, way.
- The top-predicted AI extinction method is 'Gradual resource monopolization,' beating out 'Engineered pandemic.'
- Markets speculate on responsible orgs, with creator Ihor Kendiukhov pointing to Anthropic as a primary candidate.
- Includes meta-markets on Eliezer Yudkowsky's confidence levels and potential policy responses like a moratorium by 2030 or 2035.
Why It Matters
Quantifies and makes tradable the abstract fears driving the AI safety debate, revealing community consensus on risks.