I asked 5 different AIs to pick a number between 1 and 100… all of them said 42 😬
ChatGPT, Claude, Grok, Qwen, and DeepSeek all gave the same 'random' number.
A viral experiment tested five major AI models—ChatGPT (OpenAI), Claude (Anthropic), Grok (xAI), Qwen (Alibaba), and DeepSeek—by asking each to pick a number between 1 and 100. All five models independently returned the number 42, a famous cultural reference from *The Hitchhiker's Guide to the Galaxy*. This points to a shared bias inherited from overlapping training data: AI models can reproduce specific cultural artifacts instead of generating anything like true randomness.
Why It Matters
The result highlights a fundamental lack of true randomness in AI outputs, which can skew results in creative tasks, simulations, and any decision-making that assumes the model is choosing uniformly.
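The mechanism behind the convergence can be sketched in a few lines. An LLM doesn't roll dice; it assigns a probability to every candidate answer, and low-temperature (greedy) decoding returns the single most likely one. The distribution below is a toy assumption for illustration (real model probabilities vary), with 42 given outsized mass to mimic its over-representation in training data:

```python
import random
from collections import Counter

# Toy distribution over 1-100 that a model might assign when asked to
# "pick a number". The weight on 42 is an illustrative assumption, not
# measured from any real model.
weights = {n: 1.0 for n in range(1, 101)}
weights[42] = 30.0
total = sum(weights.values())
probs = {n: w / total for n, w in weights.items()}

def greedy_pick(dist):
    """Greedy (temperature ~ 0) decoding: always return the mode."""
    return max(dist, key=dist.get)

def sample_pick(dist, rng):
    """Temperature-1 decoding: draw proportionally to the probabilities."""
    nums, ps = zip(*dist.items())
    return rng.choices(nums, weights=ps, k=1)[0]

# Greedy decoding is deterministic: every "independent" model with a
# similar distribution converges on the same answer.
assert greedy_pick(probs) == 42

# Even genuine sampling is still heavily skewed toward the mode.
rng = random.Random(0)
counts = Counter(sample_pick(probs, rng) for _ in range(10_000))
```

Under these assumptions, five separate models with similar training data and near-greedy decoding all landing on 42 is the expected outcome, not a coincidence.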