Announcing EA Omelas
New EA chapter targets city where one child's basement suffering powers civilization's happiness.
A new Effective Altruism (EA) research chapter called EA Omelas has launched with a provocative mission: to address the suffering of the single child that powers the utopian happiness of an entire fictional civilization. Co-written extensively with Claude Opus 4.6, the announcement applies EA's rigorous analytical frameworks to the philosophical thought experiment from Ursula K. Le Guin's story "The Ones Who Walk Away from Omelas." The team's ITN (Importance, Tractability, Neglectedness) analysis reveals what they call "the most neglected cause area ever identified": precisely $0 is currently allocated to reducing the child's suffering, even though the child endures an estimated -10⁸ QALYs (quality-adjusted life years) of suffering against the city's +10¹² QALYs of welfare.
EA Omelas proposes a multi-pronged research agenda starting with direct suffering reduction experiments, including testing whether basic comforts like blankets ($2-5 interventions) could reduce the child's suffering by 3% while decreasing city happiness by only 0.0001%. The team acknowledges moral uncertainty, assigning 40% credence to total utilitarianism (which would make Omelas good), 30% to contractualism (which would make it monstrous), and 30% to virtue ethics (where "the vibes are bad"). They're also investigating whether the Omelas model could be replicated to reduce inefficiently distributed suffering elsewhere and conducting longtermist research into improving human imagination. The announcement addresses the growing "Omelaccelerationist (o/acc)" movement and lists current hiring needs for this unconventional EA venture.
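The blanket intervention's trade-off can be checked with simple expected-value arithmetic. The sketch below uses only the illustrative figures from the announcement (-10⁸ and +10¹² QALYs, a 3% suffering reduction, a 0.0001% happiness cost); the function name and parameters are hypothetical:

```python
# Illustrative figures from the announcement (not real-world estimates).
CHILD_SUFFERING_QALYS = -1e8   # the child's estimated suffering
CITY_WELFARE_QALYS = 1e12      # the city's aggregate welfare

def net_qalys_from_blanket(suffering_reduction=0.03,
                           happiness_reduction=0.000001):
    """Net QALY change if a $2-5 blanket cuts the child's suffering
    by 3% while reducing city happiness by 0.0001% (1e-6 as a fraction)."""
    suffering_averted = -CHILD_SUFFERING_QALYS * suffering_reduction
    happiness_lost = CITY_WELFARE_QALYS * happiness_reduction
    return suffering_averted - happiness_lost
```

On these numbers the blanket averts about 3 million QALYs of suffering at a cost of about 1 million QALYs of city happiness, netting roughly +2 million QALYs for a few dollars, which is presumably why the announcement flags the cause as so neglected.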
- Applies EA's ITN framework to fictional scenario: scores extreme neglectedness ($0 funding) despite child's estimated -10⁸ QALYs suffering
- Proposes low-cost interventions starting at $2-5 (blankets, sanitation) that might reduce suffering 3% with minimal city impact (0.0001% happiness reduction)
- Team uses moral uncertainty weights: 40% total utilitarianism, 30% contractualism, 30% virtue ethics, with members reporting 7-9/10 "moral discomfort" scores
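The 40/30/30 credence split above lends itself to a standard credence-weighted evaluation. A minimal sketch follows; the numeric verdict scores on a -1..+1 scale are illustrative assumptions, not figures from the announcement:

```python
# The team's stated credences across moral theories (from the announcement).
CREDENCES = {
    "total_utilitarianism": 0.40,  # would make Omelas good
    "contractualism": 0.30,        # would make it monstrous
    "virtue_ethics": 0.30,         # "the vibes are bad"
}

# Hypothetical verdict scores: +1 = permissible, -1 = monstrous.
VERDICTS = {
    "total_utilitarianism": 1.0,
    "contractualism": -1.0,
    "virtue_ethics": -0.5,
}

def expected_moral_value(credences=CREDENCES, verdicts=VERDICTS):
    """Credence-weighted average verdict across moral theories."""
    return sum(credences[theory] * verdicts[theory] for theory in credences)
```

Under these assumed scores the weighted verdict comes out slightly negative, consistent with the members' reported 7-9/10 "moral discomfort."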
Why It Matters
Tests EA frameworks on extreme philosophical edge cases, potentially refining cause prioritization methods for real-world suffering reduction.