My Ethics
A 2026 blog post outlines a consequentialist, suffering-focused ethical framework for guiding AI development.
A blog post titled 'My Ethics,' written by user NickyP on the LessWrong forum, has gone viral within AI safety and alignment circles. Dated April 2026, the post outlines a personal ethical system built from three foundational axioms: suffering is intrinsically bad, pleasure is intrinsically good, and death is a unique negative distinct from mere non-existence. The author describes this as a form of constructed moral reasoning, acknowledging that it rests on 'vibes' and intuition rather than objective moral realism, but argues that it is nonetheless a useful framework for decision-making, especially where future beings and long-term outcomes are concerned.
The post's significance lies in its potential application to AI alignment: the challenge of ensuring advanced AI systems act in accordance with human values. By explicitly stating a preference for '1 trillion people living extraordinary lives' over a larger number living merely 'quite good' lives, and by rejecting extreme utilitarian conclusions, it offers a concrete, debatable target for value specification. The discussion it has sparked centers on whether clearly articulated, if personal, ethical frameworks like this are necessary inputs for training or constraining AI systems such as GPT-5 or Claude, or whether they instead highlight the profound difficulty of agreeing on a universal moral code for machines.
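To see why this counts as a concrete, debatable target, here is a minimal sketch of the comparison in code. Nothing below comes from the post itself: the welfare figures, the `Population` type, and the `post_prefers` threshold rule are all hypothetical assumptions, chosen only to contrast pure total-utility aggregation with the post's stated preference.

```python
from dataclasses import dataclass

@dataclass
class Population:
    size: float     # number of lives
    welfare: float  # average welfare per life, on a hypothetical scale

# Hypothetical figures for illustration; the post only specifies
# "1 trillion people living extraordinary lives" versus a larger
# number of merely "quite good" lives.
extraordinary = Population(size=1e12, welfare=100.0)
quite_good = Population(size=1e15, welfare=1.0)

def total_utility(p: Population) -> float:
    """Pure total utilitarianism: welfare summed over every life."""
    return p.size * p.welfare

def post_prefers(a: Population, b: Population, threshold: float = 50.0) -> Population:
    """A toy reading of the post's preference: favor populations whose
    lives clear an 'extraordinary' welfare threshold, falling back to
    total utility only when both (or neither) clear it."""
    candidates = [p for p in (a, b) if p.welfare >= threshold]
    return max(candidates or [a, b], key=total_utility)

print(total_utility(quite_good) > total_utility(extraordinary))  # True: raw totals favor the larger population
print(post_prefers(extraordinary, quite_good) is extraordinary)  # True: the post's stated preference does not
```

Under pure total aggregation the larger population wins (10^15 versus 10^14 total welfare), which is exactly the kind of extreme conclusion the post rejects. The threshold rule is only one of many possible formalizations, and that is the point: the preference is stated precisely enough to be encoded, tested, and argued with.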
- Outlines a suffering-focused, consequentialist ethics whose core axioms hold suffering intrinsically bad and pleasure good.
- Dated April 2026, it's framed as a potential value system for guiding advanced AI development and alignment.
- Rejects extreme utilitarian conclusions, preferring fewer 'extraordinary' lives to a larger number of merely 'quite good' ones.
Why It Matters
Highlights the kind of concrete, personal value frameworks now being proposed as inputs to the AI alignment problem for future superintelligent systems.