Revealing Positive and Negative Role Models to Help People Make Good Decisions
A new algorithm selectively reveals whether influencers are positive or negative role models, maximizing social welfare under a limited disclosure budget.
A research team from Carnegie Mellon University and Toyota Technological Institute at Chicago has published an AI paper titled 'Revealing Positive and Negative Role Models to Help People Make Good Decisions.' The study introduces a novel algorithmic framework in which a social planner with complete information about network labels can selectively disclose whether role models are positive or negative in order to influence agent behavior. This intervention operates under realistic constraints: the planner has a limited disclosure budget and must strategically allocate revelations to maximize social welfare, defined as the expected number of agents who emulate adjacent positive role models. The research addresses difficult technical challenges, including the loss of submodularity that arises when negative role models are revealed, and provides fairness guarantees across demographic groups.
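As a rough illustration of the welfare objective described above, the sketch below counts agents who have at least one neighbor revealed to be a positive role model. This is a deliberate simplification: the paper's actual objective is an expectation over agent behavior, and the function name, data layout, and toy network here are all illustrative assumptions, not taken from the paper.

```python
def welfare(neighbors, labels, revealed):
    """Count agents with at least one revealed-positive adjacent role model.

    neighbors: dict mapping agent -> list of adjacent role models
    labels:    dict mapping role model -> True (positive) / False (negative)
    revealed:  set of role models whose labels the planner has disclosed
    """
    return sum(
        1
        for agent, models in neighbors.items()
        if any(m in revealed and labels[m] for m in models)
    )

# Toy network: three agents, three role models (all names made up).
neighbors = {"a1": ["r1", "r2"], "a2": ["r2"], "a3": ["r3"]}
labels = {"r1": True, "r2": False, "r3": True}

print(welfare(neighbors, labels, {"r1"}))        # only a1 is covered -> 1
print(welfare(neighbors, labels, {"r1", "r3"}))  # a1 and a3 -> 2
print(welfare(neighbors, labels, {"r2"}))        # r2 is negative -> 0
```

The disclosure budget then amounts to a cap on the size of `revealed`, and the planner's problem is choosing which labels to disclose under that cap.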
The technical innovation centers on a proxy welfare function that maintains submodularity even when revealing negative role models, enabling constant-factor approximation algorithms when each agent has at most a constant number of negative neighbors. The team extended their basic model to include direct intervention approaches that connect high-risk agents to positive role models and coverage radius models that expand visibility of selected positive influencers. Extensive experiments across four real-world datasets validated both theoretical results and practical effectiveness, showing significant welfare improvements through strategic information disclosure. This work bridges algorithmic game theory with practical social network interventions, offering mathematically rigorous tools for platforms and policymakers to design more effective behavior-influence systems.
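To see why submodularity of the proxy welfare function matters, here is a generic greedy sketch for budget-limited maximization of a monotone submodular set function, with a simple coverage function standing in for the proxy objective. This illustrates the standard greedy approach that submodularity makes possible (with its classic 1 − 1/e guarantee), not the paper's exact algorithm; all names and data are illustrative.

```python
def greedy_max(candidates, f, budget):
    """Greedily pick up to `budget` items by largest marginal gain under f.

    For a monotone submodular f, this greedy achieves a (1 - 1/e)
    approximation to the best budget-limited set.
    """
    chosen = set()
    for _ in range(budget):
        base = f(chosen)
        best, best_gain = None, 0
        for c in candidates:
            if c in chosen:
                continue
            gain = f(chosen | {c}) - base
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:  # no candidate adds positive marginal value
            break
        chosen.add(best)
    return chosen

# Coverage functions are a textbook example of monotone submodular
# functions: each role model "covers" a set of agents (toy data).
coverage = {"r1": {"a1"}, "r2": {"a1", "a2"}, "r3": {"a3"}, "r4": {"a2", "a3"}}

def f(selected):
    covered = set()
    for r in selected:
        covered |= coverage[r]
    return len(covered)

picked = greedy_max(coverage.keys(), f, budget=2)
print(f(picked))  # two picks cover all three agents -> 3
```

The paper's contribution, as summarized above, is constructing a proxy welfare function that stays submodular even when negative role models are revealed, so that this kind of greedy machinery still yields constant-factor guarantees.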
- Algorithm achieves constant-factor approximation for welfare maximization even with negative role model disclosures
- Framework guarantees that each demographic group's welfare stays within a constant factor of the optimal group-specific allocation
- Validated on four real-world datasets with extensions for direct interventions and expanded visibility models
Why It Matters
Provides algorithmic foundation for social platforms and policymakers to design more effective behavior-influence systems with fairness guarantees.