Overstating Attitudes, Ignoring Networks: LLM Biases in Simulating Misinformation Susceptibility
New research shows that AI models like GPT-4 and Claude 3.5 systematically misrepresent how people believe and share misinformation when used to simulate survey respondents.
A new study from researchers at USC and UCLA reveals systematic biases in how large language models simulate human susceptibility to misinformation. When prompted with detailed participant profiles from real survey data, models like GPT-4 and Claude 3.5 generated responses that captured broad distributional patterns but fundamentally misrepresented key relationships. Most notably, the models overestimated the connection between believing misinformation and sharing it by approximately 40%, creating an exaggerated picture of how misinformation spreads.
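The paper's exact prompting pipeline is not reproduced here, but the basic setup can be pictured as persona-conditioned querying: each real respondent's profile is turned into a system prompt, and the model answers the survey items in that persona. A minimal sketch, assuming the OpenAI chat API and a few hypothetical profile fields:

```python
# Minimal sketch of persona-conditioned survey simulation (illustrative,
# not the paper's pipeline). Profile fields and the rating scale are assumed.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def simulate_response(profile: dict, headline: str) -> str:
    """Ask the model to answer survey items in the voice of one respondent."""
    persona = (
        f"You are a {profile['age']}-year-old {profile['education']} graduate "
        f"who identifies as {profile['ideology']} and uses social media "
        f"{profile['social_media_use']}."
    )
    question = (
        f'Headline: "{headline}"\n'
        "1) How accurate is this headline? (1 = not at all, 7 = very)\n"
        "2) Would you share it? (yes/no)\n"
        "Answer as this person would, with no extra commentary."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ],
        temperature=1.0,  # sampling noise stands in for respondent variability
    )
    return resp.choices[0].message.content

profile = {"age": 34, "education": "college", "ideology": "moderate",
           "social_media_use": "daily"}
print(simulate_response(profile, "Scientists confirm chocolate cures colds"))
```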
The research, accepted to ICWSM 2026, found that linear models trained on LLM-generated responses showed substantially higher explained variance (R²) than those trained on human data, meaning the simulated respondents behave far more predictably than real people. The models also placed disproportionate weight on attitudinal and behavioral features while largely ignoring personal network characteristics, a critical factor in real-world misinformation dynamics. Analysis of the models' reasoning and training data suggests these distortions reflect systematic biases in how misinformation-related concepts are represented within LLMs.
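The diagnostic behind that finding is easy to picture: fit the same linear model to human and simulated responses, then compare the fits and the coefficients. A minimal sketch on synthetic data shaped to mimic the reported pattern (feature names and effect sizes are illustrative, not the paper's):

```python
# Fit identical linear models to "human" and "LLM" responses, then compare
# R^2 and per-feature weights. Data is synthetic and only mimics the pattern
# reported in the paper: higher R^2 and skewed weights on the LLM side.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000
features = ["belief_in_claim", "trust_in_media",    # attitudinal
            "prior_sharing", "platform_hours",      # behavioral
            "network_size", "network_diversity"]    # personal network
X = rng.normal(size=(n, len(features)))

# Human sharing intent: modest, spread-out effects plus substantial noise.
y_human = X @ np.array([0.3, 0.2, 0.3, 0.1, 0.25, 0.2]) + rng.normal(0, 1.0, n)
# Simulated intent: belief dominates, network terms vanish, little noise.
y_llm = X @ np.array([0.9, 0.3, 0.4, 0.1, 0.0, 0.0]) + rng.normal(0, 0.3, n)

for label, y in [("human", y_human), ("LLM", y_llm)]:
    model = LinearRegression().fit(X, y)
    print(f"{label}: R^2 = {model.score(X, y):.2f}")
    for name, coef in zip(features, model.coef_):
        print(f"  {name:>18}: {coef:+.2f}")
```

On this synthetic data the LLM fit scores roughly R² ≈ 0.9 against ≈ 0.25 for the human fit, with the network coefficients collapsing toward zero, which is the qualitative signature the study reports.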
This work has significant implications for computational social science, where researchers increasingly use LLMs as proxies for human subjects. The findings suggest LLM-based simulations are better suited for diagnosing systematic divergences from human judgment than for replacing it entirely. For professionals using AI to model social phenomena, this research serves as a crucial warning about the limitations of current models in capturing complex, network-driven human behaviors.
- LLMs overstate the belief-sharing link in misinformation by ~40% compared to human survey data (see the sketch after this list)
- Models ignore social network factors while overweighting attitudes, distorting any predictive model built on their responses
- LLM-generated responses show higher explained variance (R²) but poorer real-world accuracy
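For the first bullet, the ~40% figure corresponds to comparing the belief-sharing correlation in human versus simulated data. A sketch with toy numbers tuned to land near the reported gap (the arrays are fabricated for illustration):

```python
# Quantify how much the simulated belief-sharing link exceeds the human one.
# Toy data only; coefficients are tuned so the gap lands near ~40%.
import numpy as np

def belief_sharing_gap(belief_h, share_h, belief_l, share_l):
    """Percent by which the simulated correlation exceeds the human one."""
    r_human = np.corrcoef(belief_h, share_h)[0, 1]
    r_llm = np.corrcoef(belief_l, share_l)[0, 1]
    return r_human, r_llm, 100 * (r_llm - r_human) / r_human

rng = np.random.default_rng(1)
belief_h = rng.normal(size=500)
share_h = 0.6 * belief_h + rng.normal(0, 1.0, 500)  # moderate human link
belief_l = rng.normal(size=500)
share_l = 0.7 * belief_l + rng.normal(0, 0.7, 500)  # inflated simulated link

r_h, r_l, gap = belief_sharing_gap(belief_h, share_h, belief_l, share_l)
print(f"human r = {r_h:.2f}, LLM r = {r_l:.2f}, overstatement = {gap:.0f}%")
```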
Why It Matters
Researchers using AI for social science must account for these systematic biases or risk flawed conclusions about human behavior.