Research & Papers

Investigating Vaccine Buyer's Remorse: Post-Vaccination Decision Regret in COVID-19 Social Media Using Politically Diverse Human Annotation

AI analysis of 1.6M YouTube comments finds vaccine regret concentrated in skeptic communities.

Deep Dive

A team from the Rochester Institute of Technology (RIT), led by Ashiqur R. KhudaBukhsh, has published a study analyzing post-vaccination regret in COVID-19 social media discourse. The researchers built a benchmark dataset from 1.6 million YouTube comments across 1,000 news videos, using politically diverse human annotators to label expressions of 'vaccine buyer's remorse'. This design addresses the subjective and often politicized nature of vaccine sentiment analysis.

Using large language models (LLMs), including GPT-4 and Claude 3, the team quantified the prevalence of vaccine regret and analyzed its characteristics. They found that while vaccine regret appears in less than 2% of public discourse, it is disproportionately concentrated in vaccine-skeptic influencer communities. First-person narratives citing adverse health events were the primary vehicle for expressing regret; vicarious regret, based on others' stories, was less common.
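To make the "less than 2% of discourse" figure concrete: a prevalence estimate from classifier labels over a large comment corpus usually comes with a confidence interval. The sketch below (not from the paper; the counts are hypothetical, chosen only to be consistent with the reported under-2% figure) computes a 95% Wilson score interval for such a proportion.

```python
import math

def wilson_interval(k, n, z=1.96):
    """95% Wilson score interval for a proportion k successes out of n trials."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# Hypothetical: 25,000 regret-labeled comments out of 1.6M total.
lo, hi = wilson_interval(25_000, 1_600_000)
```

At corpus sizes like 1.6M, the interval is very tight, so even a noisy classifier gives a stable prevalence estimate; the harder problem, which the study tackles with human annotation, is whether the labels themselves are trustworthy.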

The research also examined biases introduced by different LLMs in detecting vaccine regret, finding variations in sensitivity and specificity across models. The politically diverse annotation panel produced more robust ground truth data, addressing concerns about ideological bias in AI training data for sensitive public health topics. Together, the dataset and methodology mark a step forward in using AI to analyze complex public health sentiment in social media.
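The cross-model comparison above rests on two standard metrics. A minimal sketch (not the paper's code; the labels are toy data) of how sensitivity and specificity are computed from an LLM's binary "regret" predictions against human-annotated ground truth:

```python
def sensitivity_specificity(truth, pred):
    """truth, pred: equal-length lists of 0/1 labels (1 = comment expresses regret)."""
    tp = sum(1 for t, p in zip(truth, pred) if t == 1 and p == 1)  # true positives
    fn = sum(1 for t, p in zip(truth, pred) if t == 1 and p == 0)  # missed regret
    tn = sum(1 for t, p in zip(truth, pred) if t == 0 and p == 0)  # true negatives
    fp = sum(1 for t, p in zip(truth, pred) if t == 0 and p == 1)  # false alarms
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0  # recall on regret comments
    specificity = tn / (tn + fp) if (tn + fp) else 0.0  # recall on non-regret comments
    return sensitivity, specificity

# Toy example: annotator-majority ground truth vs. one model's predictions.
truth = [1, 1, 1, 0, 0, 0, 0, 0]
pred  = [1, 1, 0, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(truth, pred)
```

Because regret appears in under 2% of comments, specificity matters disproportionately: even a small false-positive rate on the majority class can swamp the true positives, which is one reason per-model variation in these metrics is worth reporting.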

Key Points
  • Analyzed 1.6M YouTube comments using LLMs (GPT-4, Claude 3) to detect vaccine regret
  • Found vaccine regret in <2% of discourse, concentrated in skeptic communities
  • Used politically diverse human annotation to reduce bias in sensitive topic analysis

Why It Matters

Shows how AI can analyze public health sentiment while addressing political bias in training data.