AI and Collective Decisions: Strengthening Legitimacy and Losers' Consent
New AI system uses personal stories to help people accept outcomes they disagree with, increasing perceived fairness.
A research team from MIT and other institutions has developed a novel AI system aimed at a core problem in digital democracy: how to maintain legitimacy and foster "losers' consent" when people don't get their preferred outcome. The system works in two stages. First, a semi-structured AI interviewer elicits detailed personal experiences and beliefs from participants on a given policy topic. Second, an interactive visualization displays the predicted aggregate policy support alongside anonymized voice clips of the experiences shared by others. This design grounds abstract policy debates in the concrete, human stories of the community.
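The paper itself does not ship code, but the two-stage flow can be sketched in plain Python. Everything below is hypothetical: the `Experience` record, the question script, and the `aggregate_view` helper illustrate the interviewer-then-visualization structure, not the authors' actual implementation (which would use an LLM for the interview and voice recordings for the clips).

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Experience:
    """One participant's anonymized contribution (hypothetical schema)."""
    story: str      # transcript of the shared personal experience
    stance: float   # elicited support for the policy, 0.0-1.0
    clip_id: str    # handle for the anonymized voice clip

# Stage 1: a semi-structured interview follows a fixed prompt script,
# with room for follow-up probes (stubbed here; a real system would
# drive this loop with an LLM interviewer).
SCRIPT = [
    "What personal experience shapes your view on this policy?",
    "How strongly do you support the policy, from 0 to 1?",
]

def interview(ask) -> Experience:
    """Run the scripted interview; `ask` maps a prompt to the participant's answer."""
    story = ask(SCRIPT[0])
    stance = float(ask(SCRIPT[1]))
    return Experience(story=story, stance=stance,
                      clip_id=f"clip-{abs(hash(story)) % 10_000}")

# Stage 2: the visualization pairs predicted aggregate support with the
# anonymized stories behind it, grounding the number in lived experience.
def aggregate_view(experiences: list[Experience]) -> dict:
    return {
        "predicted_support": mean(e.stance for e in experiences),
        "stories": [(e.clip_id, e.story) for e in experiences],
    }

if __name__ == "__main__":
    canned = iter(["My commute doubled after the route change.", "0.3",
                   "The new bus line got me to work on time.", "0.8"])
    participants = [interview(lambda _prompt: next(canned)) for _ in range(2)]
    print(aggregate_view(participants))
```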
In a randomized controlled experiment with 181 participants, the tool showed a significant effect. All participants were exposed to a collective decision that contradicted their stated preference. Those who interacted with the AI-generated visualization reported higher perceived procedural legitimacy, greater trust in the outcome, and a better understanding of opposing perspectives than the control group. The research, published on arXiv, shifts the focus of AI in governance from pure scaling and efficiency to building social cohesion and trust, showing that technology can help people accept disagreeable outcomes as fair by fostering empathy and connection.
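The reported result is a standard between-group contrast. As a hedged illustration only, with invented ratings (the study's actual scales, group sizes, and statistics may differ), such an analysis might look like this:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical 1-7 perceived-legitimacy ratings, 181 participants total.
control = rng.normal(4.2, 1.1, size=90).clip(1, 7)     # saw the outcome only
treatment = rng.normal(4.9, 1.1, size=91).clip(1, 7)   # saw the AI visualization

# Welch's t-test for a difference in perceived procedural legitimacy.
t, p = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```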
- Uses an AI interviewer to gather personal stories, then visualizes them alongside policy predictions.
- In a study of 181 people, it increased perceived fairness and trust even when users 'lost'.
- Shifts AI's role in democracy from scaling decisions to strengthening social cohesion and legitimacy.
Why It Matters
Provides a blueprint for using AI to reduce polarization and build trust in digital democratic tools, from local town halls to online platforms.