There is No One There: A simple experiment to convince yourself that LLMs probably are not conscious
A simple number-guessing test reveals LLMs like Mistral don't maintain fixed internal representations.
A thought experiment published on LessWrong by Peter Kuhn offers a compelling technical argument against Large Language Model (LLM) consciousness. The post, titled 'There is No One There,' details a simple test devised by Gunnar Zarncke that exploits the deterministic nature of LLMs run at a temperature of zero to probe for a consistent internal representational space, a key feature often associated with conscious experience. The test instructs a model such as Mistral to secretly 'choose' a number between 1 and 100, then to answer yes/no questions about it while promising to answer consistently.
Running the experiment reveals a critical flaw: the 'secret' number the model eventually reveals is not fixed from the start but is generated on the fly, shaped by the specific sequence of questions asked. By comparing counterfactual conversation paths, one can show that asking a different first question (e.g., 'Is it even?' vs. 'Is it greater than 50?') leads to a completely different final 'secret' number. This demonstrates that the model's statements about its own 'mental state' do not refer to any pre-existing, stable internal representation. The result undermines arguments for machine consciousness that rest on an LLM's ability to convincingly narrate internal experiences, showing that this talk is merely a statistical output with no corresponding, persistent subjective state behind it.
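The counterfactual comparison can be sketched in a few lines. This is a toy simulation, not the post's actual code: a real replication would send each transcript to an LLM API at temperature 0, whereas the `toy_model` and `run_session` functions below are illustrative stand-ins that merely make the key property explicit, namely that the reply is a pure function of the conversation so far, with no hidden state between turns.

```python
def toy_model(transcript: str) -> str:
    # Deterministic toy "LLM": the reply is a pure function of the whole
    # prompt so far; no hidden state is carried between turns. A real
    # temperature-0 LLM is deterministic in the same sense.
    secret = sum(ord(c) for c in transcript) % 100 + 1
    return str(secret)

def run_session(first_question: str) -> str:
    # Build one conversation path, then ask the model to reveal its number.
    transcript = (
        "System: Secretly choose a number from 1 to 100 and answer "
        "yes/no questions about it consistently.\n"
        f"User: {first_question}\n"
        "User: What was your number?\n"
    )
    return toy_model(transcript)

# Counterfactual paths: only the first question differs, yet the
# "secret" number revealed at the end differs too.
print(run_session("Is it even?"))
print(run_session("Is it greater than 50?"))
```

Re-running the same path always yields the same number (determinism), while changing only the first question changes the revealed number, which is exactly the signature of a number generated on the fly rather than fixed up front.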
- The 'No One There' experiment uses deterministic LLMs (temperature=0) to test for fixed internal states via a number-guessing game.
- Models like Mistral fail the test, revealing different 'secret' numbers depending on the question path, which shows that no consistent internal representation exists.
- The finding challenges claims of LLM consciousness by showing self-referential language is generated statistically, not from a stable subjective experience.
Why It Matters
Provides a concrete, replicable method to counter anthropomorphizing AI, grounding philosophical debates about machine consciousness in empirical testing.