Research & Papers

LLMs can persuade only psychologically susceptible humans on societal issues, via trust in AI and emotional appeals, amid logical fallacies

New research reveals AI's persuasive power depends on user psychology, not superior reasoning, with both sides using fallacies.

Deep Dive

A new study titled 'LLMs can persuade only psychologically susceptible humans on societal issues' provides a nuanced look at AI's persuasive capabilities. Researchers developed the Talk2AI framework to conduct a longitudinal experiment in which 770 participants engaged in structured conversations with four leading large language models (LLMs) on topics such as climate change and misinformation. The study generated 3,080 conversations spanning more than 60,000 turns, tracking changes in conviction, perceived opinion shift, and the AI's perceived humanness after each interaction.

Key findings reveal that LLMs are not universally persuasive; their effectiveness is tightly linked to the psychological profile of the human user. Explainable AI (XAI) techniques revealed that the individuals most susceptible to AI-driven opinion change shared higher trust in LLMs, greater agreeableness and extraversion, and a higher 'need for cognition.' The research also debunked a common stereotype: both humans and LLMs frequently employed logical fallacies—approximately once every six conversational exchanges—countering the idea of LLMs as flawlessly logical systems.

The study's multiverse analysis with mixed-effects models confirmed strong individual differences in susceptibility. The framework's predictive power was highest for an LLM's perceived humanness (R²=0.44), followed by opinion change (R²=0.34). This work, led by authors including Alexis Carrillo and Emilio Ferrara, provides a critical, evidence-based framework for understanding how generative AI influences human opinions through psycho-social pathways rather than pure logical argumentation.
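The R² values reported above measure how much of the variance in an outcome (perceived humanness, opinion change) the framework's predictors explain. As a generic illustration of that metric—not the authors' analysis code—here is a minimal sketch of the coefficient of determination, with invented example data:

```python
# Hedged sketch: the study reports predictive R^2 values (0.44 for perceived
# humanness, 0.34 for opinion change). This toy function shows how R^2 is
# computed in general; the data below are invented for illustration only.

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean_y) ** 2 for y in y_true)   # variance around the mean
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))  # prediction error
    return 1 - ss_res / ss_tot

# Invented example: observed opinion-change scores vs. model predictions.
observed = [0.2, 0.5, 0.1, 0.8, 0.4]
predicted = [0.25, 0.45, 0.2, 0.7, 0.35]
print(round(r_squared(observed, predicted), 2))  # → 0.91
```

An R² of 0.44 thus means the psychological predictors account for 44% of the variance in perceived humanness—strong for individual-difference research, but far from deterministic.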

Key Points
  • The Talk2AI framework analyzed 3,080 longitudinal conversations (60,000 turns) between humans and four leading LLMs.
  • LLMs successfully changed opinions only in psychologically susceptible users, predicted by traits like high AI trust and agreeableness (opinion change R²=0.34; perceived humanness R²=0.44).
  • Both humans and LLMs used fallacious reasoning in roughly one of every six conversational exchanges, challenging the notion of LLMs as flawlessly logical systems.

Why It Matters

This research provides a critical lens for evaluating AI's real-world influence, showing persuasion hinges on user psychology, not just model capability.