"I followed what felt right, not what I was told": Autonomy, Coaching, and Recognizing Bias Through AI-Mediated Dialogue
AI nudges toward bias were often rejected, while inclusive prompts served as effective coaching scaffolds.
A team of researchers from Carnegie Mellon University and the University of Toronto, led by Atieh Taheri, will present a groundbreaking study at the ACM CHI 2026 conference. Their paper, titled "I followed what felt right, not what I was told," investigates how AI-mediated dialogue can influence a person's ability to recognize ableist microaggressions in social scenarios. The team developed an experimental platform in which 160 participants first rated vignettes on perceived bias and emotional impact, then engaged with different AI coaching styles.
Participants were split into four conditions: AI nudging toward bias (Bias-Directed), AI nudging toward inclusion (Neutral-Directed), unguided dialogue (Self-Directed), and a text-only control (Reading). Quantitative results showed that all three dialogue-based conditions produced stronger bias recognition than the Reading group. However, the trajectories differed significantly: Bias-Directed nudges improved participants' ability to differentiate biased from neutral statements but also increased their overall negative sentiment toward the scenarios.
Qualitative analysis of user reflections provided crucial insights. Participants frequently rejected the AI's biased nudges, asserting their own judgment and demonstrating a desire for autonomy. In contrast, the inclusive nudges of the Neutral-Directed condition were often adopted as helpful scaffolding, guiding users toward conclusions they felt were their own. The study concludes that while AI can be an effective coach for bias recognition, designing its nudges involves a critical trade-off between directive guidance and user autonomy, with inclusive prompts proving more effective for balanced learning.
- Study with 160 participants found AI dialogue improved recognition of ableist bias more than reading text alone.
- Biased AI nudges improved differentiation skills but increased negativity; inclusive nudges were adopted as helpful coaching.
- Researchers contributed a validated vignette corpus and an AI intervention platform with design implications for conversational systems.
Why It Matters
The study provides a blueprint for designing AI coaching systems that promote social awareness without undermining user autonomy.