AI Safety

Storing Food

An AI safety researcher calculates that a 0.002% annual famine risk justifies a $9/year food storage cost, using expected value analysis.

Deep Dive

AI safety researcher Jefftk published a viral post on LessWrong advocating storing substantial food reserves as rational risk mitigation against potential famine scenarios. The post presents a detailed cost-benefit analysis showing that storing 3 months of food costs $180 upfront, with an annual opportunity cost of just $9 (assuming 5% real returns). Using expected value calculations, Jefftk argues this investment makes sense even for extremely low-probability events: specifically, a 0.002% annual chance of experiencing a 3-month famine, with human life valued at $10M in the calculation.
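The expected value argument above can be sketched with the post's own figures; the variable names here are my own, not from the original.

```python
# Sketch of the expected value calculation described above.
# All dollar figures and probabilities come from the post.

upfront_cost = 180.0          # dollars to buy 3 months of food
real_return = 0.05            # assumed 5% real annual return
annual_cost = upfront_cost * real_return  # opportunity cost: $9/year

p_famine = 0.00002            # 0.002% annual chance of a 3-month famine
value_of_life = 10_000_000    # $10M value of a statistical life

# Expected annual benefit: probability of the famine times the value at stake.
expected_annual_benefit = p_famine * value_of_life  # $200/year

print(f"Annual cost:    ${annual_cost:.0f}")
print(f"Annual benefit: ${expected_annual_benefit:.0f}")
```

At these inputs the expected benefit ($200/year) exceeds the cost ($9/year) by roughly 22x, which is why the argument survives even a very low probability estimate.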

The technical approach involves buying extra non-perishables that households already consume (like pasta and beans at $2/person-day) and rotating through them. Jefftk notes that 1 pound of pasta plus a can of beans provides approximately one person-day of calories and protein. Beyond catastrophic risk reduction, the strategy offers practical benefits: fewer grocery trips, better use of sales, and cooking flexibility.
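A quick sizing check, assuming roughly 90 days for "3 months" and the post's $2/person-day figure (the helper function and household sizes are my own illustration):

```python
# Back-of-envelope sizing for the 3-month reserve described above.

DAYS = 90                      # ~3 months of coverage
COST_PER_PERSON_DAY = 2.00     # e.g. 1 lb pasta + 1 can of beans

def reserve_cost(people: int, days: int = DAYS) -> float:
    """Upfront cost of storing `days` of food for `people` people."""
    return people * days * COST_PER_PERSON_DAY

print(reserve_cost(1))  # 180.0, matching the post's $180 figure
print(reserve_cost(4))  # 720.0 for a household of four
```

The same function makes the main constraint concrete: a household of four needs to store and rotate roughly 360 person-days of food, which is a space problem more than a cost problem.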

The post comes from an AI safety researcher whose day job involves reducing rare-but-catastrophic risks, and it draws an explicit parallel between AI risk mitigation and practical preparedness. It has gained traction in tech circles where expected value thinking and rational risk assessment are common frameworks, serving as a bridge between abstract risk modeling and concrete personal action and sparking discussion about how professionals in high-stakes fields should approach personal preparedness alongside their work on global catastrophic risks.

Key Points
  • $180 upfront ($9/year) stores 3 months of food using pasta/beans at $2/person-day
  • Expected value analysis shows the investment is worthwhile at just a 0.002% annual famine risk, valuing life at $10M
  • Approach involves rotating through non-perishables already consumed, with space as main constraint

Why It Matters

Applies AI safety risk frameworks to personal preparedness, showing how expected value thinking guides practical decisions beyond theoretical models.