Research & Papers

Multi-Agent Consensus as a Cognitive Bias Trigger in Human-AI Interaction

A 127-participant experiment shows that majority agreement among AI agents inflates user confidence.

Deep Dive

A new study from researchers Soohwan Lee and Kyungho Lee, published on arXiv (2604.22277) and accepted at the ACM CHI 2026 Bias4Trust workshop, reveals that the way multiple AI agents agree or disagree can systematically distort human judgment—regardless of what they actually say. In a controlled experiment with 127 participants, the team tested three multi-agent configurations: Majority (most agents agree), Minority (a dissenting voice), and Diffusion (no clear consensus). The quantitative results are striking: majority consensus significantly accelerated opinion change and inflated user confidence, aligning with well-known social proof and bandwagon heuristics. In contrast, minority dissent slowed opinion shifts and encouraged more deliberate, critical engagement from users.
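The three conditions can be thought of as distinct agreement structures over the same set of agent responses. As a minimal illustrative sketch (the labels, thresholds, and stance encoding here are assumptions for illustration, not the authors' experimental code), the structural signal can be separated from content like this:

```python
from collections import Counter

def consensus_condition(stances: list[str]) -> str:
    """Classify the agreement structure of a set of agent stances.

    Note: in the study the conditions were fixed by design; detecting
    them from stance counts, and these exact cutoffs, are assumptions
    made for this sketch.
    """
    n = len(stances)
    top_count = Counter(stances).most_common(1)[0][1]
    if top_count == n:
        return "majority"    # all agents converge on one position
    if top_count == n - 1:
        return "minority"    # a single dissenting voice
    return "diffusion"       # no clear consensus

print(consensus_condition(["A", "A", "A", "A", "A"]))  # majority
print(consensus_condition(["A", "A", "A", "A", "B"]))  # minority
print(consensus_condition(["A", "A", "B", "B", "C"]))  # diffusion
```

The point the study makes is that this structural label alone, independent of what "A" or "B" actually argue, shifts user confidence and opinion change.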

Qualitative analysis uncovered three distinct interpretive trajectories: reinforcing (users doubled down on initial views), aligning (users adjusted toward the consensus), and oscillating (users vacillated between positions). These patterns were shaped by how participants perceived agent independence and group dynamics over time. Crucially, the study demonstrates that the structure of agent agreement, whether the agents converge or diverge, functions as a bias-relevant signal in LLM interactions, independent of the actual content being discussed. As multi-agent AI systems proliferate in applications from customer support to collaborative decision-making, this work establishes multi-agent social influence as a concrete, designable source of bias that developers must consider in order to calibrate user trust appropriately.

Key Points
  • Majority consensus among AI agents accelerates opinion change and inflates user confidence, consistent with social proof heuristics.
  • Minority dissent slows opinion shifts and promotes more deliberative engagement from users.
  • Three interpretive trajectories identified: reinforcing, aligning, and oscillating, shaped by perceived agent independence.

Why It Matters

Designers of multi-agent AI systems must account for how consensus structure, not just content, biases user judgment.