Research & Papers

Attention: What Prevents Young Adults from Speaking Up Against Cyberbullying in an LLM-Powered Social Media Simulation

Three attention shifts are needed before practice helps bystanders intervene publicly.

Deep Dive

A team of researchers from Cornell University created "Upstanders' Practicum," a multi-AI-agent social media simulation powered by large language models (LLMs), to study what prevents young adults from speaking up publicly against cyberbullying. The simulation, built on the newly open-sourced Truman Agents platform, allowed 34 participants to practice bystander intervention across three iteratively refined versions. Unlike traditional skill-based training, the study focused on the psychological and social barriers that inhibit public intervention even when individuals want to help.

The key finding was that practice in the simulation only became effective after participants underwent three distinct attention shifts. First, they had to move from inattention to truly paying attention to the cyberbullying incident. Second, they shifted from self-focus (e.g., "I don't usually do this") to focusing on the needs of the victim and bully. Third, they transitioned from seeing the problem as a private conflict (e.g., "I could set up a meeting between them") to recognizing that public comments are about establishing social norms. Only after these shifts did participants craft tactful public messages and see a reason to intervene. The researchers argue that future bystander education should design for attention shifts and foster an "upstander identity" rather than just teaching social skills.

Key Points
  • Researchers created Upstanders' Practicum, a multi-agent LLM-powered social media simulation in which 34 young adults practiced public bystander intervention.
  • Three attention shifts were required before practice became effective: from inattention to true attention, from self-focus to other-focus, and from viewing the incident as a private conflict to public norm-setting.
  • The Truman Agents platform is open-sourced for future cyberbullying and social media research.

Why It Matters

LLM-powered simulations can surface and address the psychological barriers that keep bystanders silent, pointing toward anti-cyberbullying training that builds an upstander identity rather than only teaching social skills.