God Mode is Boring: Musings on Interestingness
A viral LessWrong post deconstructs the 'Repugnant Conclusion' by separating utility from narrative variance.
A viral essay by Alex Steiner titled 'God Mode is Boring: Musings on Interestingness' is sparking debate in AI and philosophy circles by challenging a core assumption about human values. The post directly engages with philosopher Derek Parfit's famous 'Repugnant Conclusion', a thought experiment in which utilitarian logic seems to prefer a vast population with lives barely worth living over a smaller, supremely happy one. Steiner argues that the true source of the conclusion's repugnance isn't the low average utility but the crushing monotony of 'Muzak and potatoes.' He claims Parfit's example illegitimately bundles two distinct concepts: utility and interestingness.
To separate these concepts, Steiner constructs a 2x2 matrix of hypothetical worlds along two axes: high/low utility and high/low interestingness. As exemplars, he sets a world of blissful but static meditators ('The Pod') against a dynamic, galaxy-spanning civilization full of drama and suffering ('Galactic Westeros'). His central claim is that for a future shaped by powerful AI to be truly desirable, it must preserve 'interestingness' (variance, narrative, growth, and challenge) rather than optimizing solely for a uniform, maximal pleasure signal. The essay suggests that aligning AI with complex human values requires moving beyond simple utility functions to capture this elusive preference for a life worth telling stories about.
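The separation Steiner proposes can be made concrete with a toy scoring sketch. Here, a "world" is just a list of utility values over time, average utility and a variance-based interestingness proxy are scored independently, and the function names, the sample worlds, and the use of standard deviation as a stand-in for narrative variance are all illustrative assumptions, not anything from the essay itself:

```python
import statistics

def mean_utility(world: list[float]) -> float:
    """Average hedonic utility across the world's trajectory."""
    return statistics.mean(world)

def interestingness(world: list[float]) -> float:
    """Crude proxy for narrative variance: spread of experiences over time."""
    return statistics.pstdev(world)

# 'The Pod': uniformly blissful and static — maximal utility, zero variance.
pod = [9.0] * 8

# 'Galactic Westeros': dramatic swings between triumph and suffering.
westeros = [2.0, 9.0, -3.0, 7.0, 1.0, 10.0, -5.0, 8.0]

# The Pod wins on the utility axis; Westeros wins on the interestingness
# axis — the two scores come apart, which is the point of the 2x2 framing.
print(mean_utility(pod), interestingness(pod))            # 9.0 0.0
print(mean_utility(westeros), interestingness(westeros))
```

A single utility-maximizing objective would always pick The Pod; scoring the two axes separately at least makes the trade-off visible, which is the distinction the essay argues a simple utility function erases.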
- Critiques Parfit's 'Repugnant Conclusion' by separating low utility from low 'interestingness' (monotony).
- Proposes a 2x2 framework of worlds based on utility and interestingness, with exemplars such as 'Galactic Westeros'.
- Argues AI alignment must value narrative variance and growth, not just maximal uniform pleasure.
Why It Matters
Forces a rethink of AI value alignment, arguing for complex human preferences like narrative and growth over simple utility maximization.