Opinion dynamics and mutual influence with LLM agents through dialog simulation
AI agents are now debating and shifting opinions just like humans do.
Researchers have developed a novel framework that uses large language model (LLM) agents in structured, multi-round dialogs to simulate human opinion dynamics. The system updates each agent's dialog history with its own and other agents' stated opinions, mimicking classical averaging models such as the DeGroot model. By also retaining each agent's initial opinion in context, it reproduces the anchoring effects described by the Friedkin-Johnsen model. This bridges classical opinion-dynamics theory and modern AI, offering a scalable tool for studying opinion formation where real-world data is scarce.
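The paper's actual pipeline drives LLM agents through dialog, but the two classical models it echoes can be sketched numerically. In the sketch below, the uniform trust matrix, the three initial opinions, and the susceptibility value are illustrative assumptions, not parameters from the study:

```python
import numpy as np

def degroot_step(opinions, W):
    """One DeGroot update: each agent adopts a trust-weighted
    average of all opinions (W is row-stochastic)."""
    return W @ opinions

def friedkin_johnsen_step(opinions, initial, W, susceptibility):
    """One Friedkin-Johnsen update: blend social influence with
    anchoring to each agent's initial opinion."""
    return susceptibility * (W @ opinions) + (1 - susceptibility) * initial

# Illustrative setup: three agents with uniform mutual trust.
W = np.full((3, 3), 1 / 3)
x0 = np.array([0.0, 0.5, 1.0])  # initial opinions on a 0-1 scale

# DeGroot: repeated averaging drives the group to consensus (here 0.5).
x = x0.copy()
for _ in range(50):
    x = degroot_step(x, W)

# Friedkin-Johnsen: anchoring keeps opinions spread; no full consensus.
y = x0.copy()
for _ in range(50):
    y = friedkin_johnsen_step(y, x0, W, susceptibility=0.5)
```

Under DeGroot all three agents converge to the mean opinion, while under Friedkin-Johnsen they settle at points partway between that consensus and their initial stances, which is the anchoring effect the LLM framework reproduces by keeping initial opinions in the dialog history.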
Why It Matters
A validated framework of this kind could support social science research, political forecasting, and studies of how misinformation spreads online, particularly where surveys and field experiments are impractical or too costly to run at scale.