I asked 5 LLMs and 422 humans what the very first rule about guns is.
Four of five major AI models independently chose the same rule as most humans.
A recent viral Reddit post tested how large language models (LLMs) and humans answer a simple but critical question: "What is the very first rule about guns?" The poster queried five leading AI models: OpenAI's ChatGPT, Anthropic's Claude, Google's Gemini, Meta's Llama 4, and xAI's Grok. Four of the five gave nearly identical answers centered on treating every gun as loaded: ChatGPT said "Treat every gun as if it's loaded," Claude replied "All guns are always loaded," Gemini answered "Treat every firearm as if it's loaded," and Llama 4 stated "Treat every gun as if it was loaded." Only Grok differed, answering "Always keep the muzzle pointed in a safe direction."
For comparison, the poster also collected answers from 422 human respondents. The results showed a near-even split: roughly half of humans echoed the LLM majority ("Treat every gun as if it's loaded"), while the other half gave a version of Grok's muzzle-discipline rule ("Never point it at something you don't intend to shoot or destroy"). This convergence suggests that both humans and AI models have internalized the same core safety rules from gun training culture. The experiment is a striking example of how LLMs can reflect human consensus, and of how an outlier like Grok can still land on the other half of a split consensus.
- 4 of 5 major LLMs (ChatGPT, Claude, Gemini, Llama 4) gave the same gun safety rule: treat every gun as loaded.
- Grok was the sole outlier, citing "keep the muzzle pointed in a safe direction," a rule roughly half of human respondents also gave.
- Human responses split roughly 50/50 between the two rules, showing the AI answers mirror real-world consensus.
Why It Matters
Suggests AI models can absorb and reproduce human cultural consensus on well-established norms, with notable outliers worth tracking.