Research & Papers

Characterizing AI Manipulation Risks in Brazilian YouTube Climate Discourse

Researchers find AI can exploit psychological traits to spread climate denialism, releasing a 2.7M comment dataset.

Deep Dive

A team of researchers, including Wenchao Dong, Marcelo S. Locatelli, Virgilio Almeida, and Meeyoung Cha, published a study in AAAI 2026's Special Track on AI for Social Impact titled "Characterizing AI Manipulation Risks in Brazilian YouTube Climate Discourse." The research investigates how climate-related narratives evolve on visual platforms like YouTube, focusing on Brazil, a geopolitically crucial player in global environmental negotiations. Through three case studies, the team analyzed a dataset of 226,000 Brazilian YouTube videos and 2.7 million user comments to understand the mechanics of audience engagement and persuasion.

The study makes three key contributions. First, it identifies which specific psychological content traits most effectively drive user engagement on the platform. Second, it quantifies how strongly these traits influence a video's popularity. Third, and most critically, it demonstrates how these insights could inform the design of persuasive synthetic campaigns, such as climate-denial messaging, using recent generative language models like GPT-4 or Claude. In effect, the research maps a blueprint of vulnerabilities that bad actors could exploit with AI to manipulate public opinion on a critical issue.

In a significant move for transparency and future research, the authors have publicly released their annotated dataset. This resource includes fine-grained labels for persuasive strategies, theory-of-mind categorizations of user responses, and typologies of content creators (e.g., individuals, politicians, NGOs). This dataset provides a foundation for other researchers to study digital climate communication and the ethical risks of algorithmically amplified narratives, helping the community build defenses against AI-powered disinformation.
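To make the kind of analysis such a dataset enables concrete, here is a minimal sketch of tallying persuasive-strategy labels by creator type. The field names (`creator_type`, `strategy`, `tom_category`) and the sample records are purely hypothetical illustrations; the released dataset's actual schema may differ.

```python
# Hypothetical sketch only: field names and sample records are illustrative
# assumptions, not the schema of the released dataset.
from collections import Counter

comments = [
    {"creator_type": "politician", "strategy": "appeal_to_authority", "tom_category": "agreement"},
    {"creator_type": "ngo", "strategy": "scientific_evidence", "tom_category": "support"},
    {"creator_type": "politician", "strategy": "fear_appeal", "tom_category": "skepticism"},
]

def strategy_counts_by_creator(records):
    """Tally how often each persuasive-strategy label appears per creator type."""
    counts = {}
    for r in records:
        counts.setdefault(r["creator_type"], Counter())[r["strategy"]] += 1
    return counts

print(strategy_counts_by_creator(comments))
```

A tally like this would let researchers ask, for instance, whether certain creator types lean on particular persuasive strategies, which is the kind of question the fine-grained labels are designed to support.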

Key Points
  • Study analyzes 226,000 YouTube videos and 2.7 million comments from Brazilian climate discourse to identify manipulation risks.
  • Finds that generative AI models like GPT-4 could be used to design effective synthetic campaigns promoting climate denialism.
  • Publicly releases a large, annotated dataset to support future research on algorithmic amplification and ethical AI use.

Why It Matters

This research exposes how AI can weaponize social media psychology, threatening evidence-based climate policy with synthetic disinformation campaigns.