A Closer Look at the “Societies of Thought” Paper
LLMs don't just compute—they generate internal debates with distinct personalities.
A new paper reports that reasoning models spontaneously generate internal debates among simulated agents with distinct personalities and expertise, which the authors call “societies of thought.” These conversational patterns appear at rates hundreds to thousands of percent higher than in standard chain-of-thought reasoning. The simulated agents show high variance in Big Five personality traits and specialized expertise, mirroring the diversity that drives collective intelligence in human groups. The link appears causal: toggling these conversational features on increases beneficial cognitive behaviors such as verification.
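As a rough illustration of how one might quantify conversational features in a reasoning trace, here is a minimal sketch. The marker list, example traces, and scoring are toy assumptions of mine, not the paper's actual methodology:

```python
import re

# Illustrative markers of conversational, multi-voice reasoning.
# This list is a toy stand-in, NOT the paper's feature set.
DIALOGUE_MARKERS = [
    r"\bwait\b", r"\bactually\b", r"\bhmm\b",
    r"\blet me (?:check|verify|reconsider)\b",
    r"\bbut what if\b", r"\bare you sure\b",
]

def conversational_rate(trace: str) -> float:
    """Count dialogue-like markers per 100 words in a reasoning trace."""
    words = len(trace.split())
    hits = sum(len(re.findall(p, trace, flags=re.IGNORECASE))
               for p in DIALOGUE_MARKERS)
    return 100.0 * hits / max(words, 1)

# Toy traces: a flat chain-of-thought vs. a debate-style trace.
cot = "Step 1: compute 7 * 8 = 56. Step 2: add 4 to get 60. Answer: 60."
debate = ("7 * 8 = 56. Wait, are you sure? Let me check: 7 * 8 is 56, yes. "
          "Hmm, but what if the question meant 7 + 8? Actually, re-reading "
          "it, multiplication is right. 56 + 4 = 60.")

print(conversational_rate(cot))     # 0.0 for this marker list
print(conversational_rate(debate))  # substantially higher
```

A comparison like this only detects surface markers; attributing distinct personalities and expertise to the "voices," as the paper does, would require much richer analysis of the traces.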
Why It Matters
This suggests that LLMs may achieve better reasoning through a form of internal collaboration, and that deliberately encouraging such multi-voice deliberation could be a lever for unlocking new capabilities.