Research & Papers

Automated versus Human Engagement: Mapping Cognitive Bias Triggers in Online Discourse

Automated accounts embed emotional and dissonance triggers to drive engagement, but stacking too many backfires.

Deep Dive

A new study from Carnegie Mellon researchers (Ng, Zhou, Carley) presents a computational framework that detects eight cognitive bias triggers across 3.5 million contested COVID-19 posts. The team operationalized psychological heuristics, such as affective (emotional), cognitive dissonance (stance-shifting), authority, and availability (repetition) cues, into measurable data proxies. They then compared how often automated accounts (bots) and human users embedded these triggers, and how the triggers correlated with audience engagement.
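The idea of turning heuristics into data proxies can be sketched as a rule-based detector. The mini-lexicons, cue phrases, and repetition threshold below are illustrative assumptions for three of the triggers, not the study's actual operationalizations:

```python
import re

# Illustrative mini-lexicons; the study's real proxies are far richer.
AFFECT_WORDS = {"outrage", "terrifying", "amazing", "disgusting", "heartbreaking"}
AUTHORITY_CUES = {"experts say", "according to", "officials", "study shows"}

def detect_triggers(text: str) -> set[str]:
    """Flag which (illustrative) bias-trigger proxies a post contains."""
    t = text.lower()
    words = re.findall(r"[a-z']+", t)
    triggers = set()
    if AFFECT_WORDS & set(words):
        triggers.add("affective")      # emotional language proxy
    if any(cue in t for cue in AUTHORITY_CUES):
        triggers.add("authority")      # appeal-to-authority proxy
    # availability proxy: the same word repeated unusually often
    if words and max(words.count(w) for w in set(words)) >= 3:
        triggers.add("availability")
    return triggers

print(sorted(detect_triggers("Experts say this is terrifying. Share share share!")))
# → ['affective', 'authority', 'availability']
```

A real pipeline would swap these hand-written rules for validated lexicons and classifiers, but the output shape is the same: a per-post set of trigger labels that can be cross-tabulated against account type and engagement.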

Key findings: Bots embed bias triggers more frequently than humans, but the relationship with engagement is source-dependent. In bot-authored posts, affective and cognitive dissonance triggers strongly boost engagement, while authority and repetition cues correlate with lower interaction. Crucially, the study reveals a 'heuristic compounding' limit: when bots stack multiple bias triggers in a single post, the positive correlation with engagement weakens. Human-authored content shows no such drop-off, remaining structurally resilient to high trigger density. This work bridges computational social science and cognitive psychology to show how source identity shapes information diffusion mechanics.

Key Points
  • Bots embed cognitive bias triggers more frequently than human users across 3.5 million COVID-19 social media posts.
  • Affective and cognitive dissonance triggers boost bot post engagement; authority and repetition cues reduce it.
  • Bot engagement declines when multiple biases are stacked in one post (heuristic compounding), while human posts remain resilient.
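The compounding analysis in the last point can be illustrated by binning posts on how many triggers they stack and averaging engagement per bin. The data below is fabricated toy input shaped to echo the reported pattern, not the study's figures:

```python
from collections import defaultdict
from statistics import mean

# Toy (trigger_count, engagement) pairs for bot-authored posts;
# hypothetical numbers that mimic the drop-off past two stacked triggers.
bot_posts = [(1, 40), (1, 55), (2, 70), (2, 62), (3, 30), (3, 25), (4, 12)]

by_count = defaultdict(list)
for n_triggers, engagement in bot_posts:
    by_count[n_triggers].append(engagement)

for n in sorted(by_count):
    print(f"{n} trigger(s): mean engagement {mean(by_count[n]):.1f}")
```

On real data one would test this with a regression on trigger count rather than eyeballing bin means, but the shape of the comparison is the same.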

Why It Matters

Reveals how AI-generated content weaponizes psychological shortcuts to spread misinformation, and where that tactic breaks down.