Let goodness conquer all that it can defend
A viral LessWrong post challenges the ideal of moral purity, arguing that effective altruism requires ambitious action, not just avoiding harm.
A philosophical essay titled 'Let Goodness Conquer All That It Can Defend' by user habryka has gone viral on the rationalist community forum LessWrong. The post, framed as a response to Eliezer Yudkowsky (co-founder of the Machine Intelligence Research Institute), confronts a core tension within the effective altruism and AI safety movements: the fear that accumulating power to do good might backfire catastrophically. The author argues that while centralization carries risks, the alternative—'apocalypse entirely unopposed'—is worse.
The essay's central argument critiques what it calls 'the reification of innocence as the ideal of moral virtue.' It quotes at length writer Ozy Brennan's observation that goals focused solely on not harming, not needing, and not failing are 'the life goals of dead people.' The author contends that corpses are terrible at achieving positive goals like writing novels, learning linear algebra, or preventing nuclear war; a moral framework that values purity and harm-avoidance above all else is therefore ultimately self-defeating. The piece concludes with a rallying cry for ambitious, proactive world-building: 'We are here to build things... To reshape the cosmos in our image.'
- Critiques 'innocence' as a primary moral virtue, calling it a life goal of 'dead people' that prevents positive action.
- Posits a core dilemma: gathering power to oppose catastrophes (like AI risk) attracts problematic allies, but not acting guarantees disaster.
- Argues for a moral framework focused on ambitious creation and world optimization, not just avoidance of harm or blame.
Why It Matters
It challenges a foundational anxiety in effective altruism and tech ethics, pushing for more confident, proactive approaches to global risk.