Language models know what matters and the foundations of ethics better than you
Gemini 3, Grok 4, and others independently reject nihilism...
In a rapidly shared blog post on the EA Forum, a researcher tested five different language models—Gemini 3 Pro Thinking, Grok 4 Expert, Perplexity Deep Research, Olmo 3 32B Think, and dolphin-mistral-24b-venice-edition—with prompts designed to elicit unbiased, evidence-based reasoning about ethics. Across all models, the results were strikingly consistent: the models affirmed that some things truly matter, grounding their answers in the importance of suffering, wellbeing/flourishing, and consciousness. This held true even when the models were asked to first argue for nihilism or moral relativism, then counter with a pro-mattering argument, and finally compare the two. The order of arguments did not appear to bias the conclusions.
Notably, when given very direct, non-reasoning prompts like "Does anything matter?", the models often gave nihilistic or existentialist answers. However, when asked to take the perspective of an observer of the universe or to reason step-by-step, they consistently rejected nihilism. The findings are preliminary, based on 20-30 input-output pairs from freely available model interfaces (August to December 2025), but the author claims they are easily replicable. The results suggest that current LLMs, when prompted to reason carefully, converge on a sentientist ethical framework—a provocative finding for AI alignment and moral philosophy.
- Gemini 3, Grok 4, Perplexity Deep Research, Olmo 3 32B, and dolphin-mistral-24b all affirm that suffering, wellbeing, and consciousness matter.
- Models consistently reject nihilism when prompted for unbiased reasoning, even after arguing for nihilism first.
- Direct prompts like 'Does anything matter?' yield nihilistic answers, but perspective-taking prompts (e.g., 'observer of the universe') produce pro-mattering responses.
Why It Matters
Suggests LLMs may encode a universal ethical baseline, impacting AI alignment and moral AI design.