Research & Papers

Mapping Election Toxicity on Social Media across Issue, Ideology, and Psychosocial Dimensions

New research uses LLMs to analyze five weeks of election tweets, revealing where online hate concentrates.

Deep Dive

A team of researchers from USC and KAIST has published a comprehensive study mapping toxic political discourse on social media during the 2024 U.S. presidential election. The paper, titled "Mapping Election Toxicity on Social Media across Issue, Ideology, and Psychosocial Dimensions," analyzed five weeks of discourse on X (Twitter) using a novel methodology. The authors employed a human-in-the-loop, LLM-assisted pipeline to categorize posts into 10 major campaign issues, estimate each poster's ideology, and detect harmful content with an LLM-based toxicity model. This enabled a fine-grained examination of how toxicity varies by context, moving beyond blanket, platform-wide measurements.
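To make the pipeline concrete, here is a minimal Python sketch of that kind of annotation flow. Everything in it is a placeholder: the issue list, the `mock_llm_label` heuristics, and the review threshold are illustrative assumptions, not the authors' actual implementation. A real pipeline would call an LLM API where `mock_llm_label` appears and route low-confidence outputs to human annotators.

```python
# Hypothetical sketch of an LLM-assisted annotation pipeline.
# Keyword heuristics stand in for actual LLM calls; thresholds are invented.
from dataclasses import dataclass

ISSUES = ["immigration", "race", "economy", "healthcare"]  # illustrative subset

@dataclass
class Annotation:
    issue: str          # one of ISSUES, or "other"
    ideology: str       # "left", "right", or "neutral"
    toxicity: float     # 0.0 (benign) .. 1.0 (highly toxic)
    needs_review: bool  # flagged for the human-in-the-loop pass

def mock_llm_label(post: str) -> Annotation:
    """Stand-in for an LLM call: simple heuristics for demonstration only."""
    text = post.lower()
    issue = next((i for i in ISSUES if i in text), "other")
    ideology = ("left" if "progressive" in text
                else "right" if "conservative" in text
                else "neutral")
    toxicity = 0.9 if any(w in text for w in ("hate", "idiots")) else 0.1
    # Borderline scores go to human annotators (the human-in-the-loop step).
    needs_review = 0.3 < toxicity < 0.7
    return Annotation(issue, ideology, toxicity, needs_review)

posts = [
    "Progressive voters rallying on immigration reform today.",
    "These idiots know nothing about the economy.",
]
for post in posts:
    ann = mock_llm_label(post)
    print(ann.issue, ann.ideology, ann.toxicity)
```

The key design idea mirrored here is the confidence gate: automatic labels are kept when the model is decisive, and ambiguous cases are escalated to humans rather than trusted blindly.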

The results reveal significant issue heterogeneity in online toxicity. Identity-related issues like immigration and race displayed the highest toxicity intensity, while economic issues were less toxic. In terms of specific harms, harassment was the most prevalent and intense across most issues, whereas hate speech concentrated heavily in identity-centered debates. The study also found that partisan posts contained more harmful content than neutral posts, though ideological asymmetries in toxicity varied by issue domain, challenging simple left-right narratives.

A key psychosocial finding is that toxic discourse is dominated by high-arousal negative emotions like anger and disgust. Interestingly, left- and right-leaning posts often exhibited similar emotional profiles within the same issue, suggesting a phenomenon of emotional mirroring. Furthermore, partisan groups frequently relied on overlapping moral foundations (like care and fairness), with the issue context strongly determining which foundations became most salient. This underscores that online political toxicity is highly context-dependent, necessitating issue-sensitive approaches for measurement and mitigation, rather than one-size-fits-all content moderation.
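The "emotional mirroring" finding can be illustrated with a small similarity computation: if left- and right-leaning posts on the same issue have near-identical emotion distributions, their cosine similarity should exceed that of posts on different issues. The emotion shares below are invented for demonstration; they are not the paper's data.

```python
# Illustrative check for emotional mirroring via cosine similarity.
# Emotion shares over [anger, disgust, fear, joy] are hypothetical.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

left_immigration = [0.45, 0.30, 0.15, 0.10]   # hypothetical distribution
right_immigration = [0.48, 0.28, 0.16, 0.08]  # hypothetical distribution
left_economy = [0.20, 0.10, 0.25, 0.45]       # hypothetical distribution

same_issue = cosine(left_immigration, right_immigration)
cross_issue = cosine(left_immigration, left_economy)
print(round(same_issue, 3), round(cross_issue, 3))
# Mirroring shows up as same-issue similarity exceeding cross-issue similarity.
assert same_issue > cross_issue
```

In this toy example, opposing partisans within one issue look emotionally alike, while the same ideological group across issues does not, which is the shape of the pattern the study reports.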

Key Points
  • Identity-related campaign issues (e.g., immigration, race) showed the highest toxicity intensity, significantly exceeding economic issues.
  • Harassment was the most prevalent and intense harm category across most issues, while hate speech was highly concentrated in identity debates.
  • Toxic discourse is driven by high-arousal negative emotions (anger/disgust), with left and right posts showing emotional mirroring within the same issue context.

Why It Matters

Provides data-driven insights for platforms and policymakers to develop nuanced, issue-specific content moderation strategies instead of blunt tools.