Research & Papers

Examining the Effect of Explanations of AI Privacy Redaction in AI-mediated Interactions

Research with 180 participants finds explanations boost perceived privacy effectiveness by 30%.

Deep Dive

A new study from Carnegie Mellon University and the University of Washington finds that explanations are critical for building trust in AI systems that redact private information during digital conversations. Led by Roshni Kaushik, Maarten Sap, and Koichi Onoue, the research examined how 180 participants responded to an AI mediator that removed sensitive content from their messages. The system generated explanations of varying detail to communicate its privacy decisions, and participants rated it as 30% more effective at preserving privacy when explanations were provided (p<0.05, Cohen's d≈0.3).

Context proved crucial: participants relied more on explanations, and found them more helpful, when the system performed extensive redactions (p<0.05, Cohen's f≈0.2). The study also found that individual differences matter: factors such as age and baseline familiarity with AI significantly affected trust levels. These findings highlight the delicate balance between transparency and privacy in AI-mediated communication, suggesting that one-size-fits-all explanations will not suffice for building trustworthy systems.
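For readers unfamiliar with the effect sizes cited above, Cohen's d measures the standardized difference between two group means (d≈0.3 is conventionally a small-to-medium effect). The sketch below shows the standard pooled-variance formula; the sample ratings are illustrative placeholders, not data from the study.

```python
import math
import statistics


def cohens_d(group_a, group_b):
    """Standardized mean difference between two independent groups,
    using the pooled sample standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)  # sample variance (n-1 denominator)
    var_b = statistics.variance(group_b)
    # Pooled standard deviation across both groups
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd


# Hypothetical perceived-effectiveness ratings (with vs. without explanations)
with_explanations = [7, 8, 6, 9, 7, 8]
without_explanations = [6, 7, 6, 7, 8, 6]
print(f"Cohen's d = {cohens_d(with_explanations, without_explanations):.2f}")
```

Cohen's f, reported for the redaction-extent effect, is the analogous standardized effect size for ANOVA designs with more than two conditions.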

The research, currently under review at FAccT 2026, demonstrates that adaptive, context-aware explanations are essential for designing privacy-aware AI systems. As AI mediators become more common in sensitive domains like healthcare, finance, and personal messaging, this work provides concrete evidence that simply redacting information is not enough: users need to understand why content was removed in order to maintain trust in the system.

Key Points
  • Explanations increased perceived privacy effectiveness by 30% (Cohen's d≈0.3) in study with 180 participants
  • Context matters: explanations were most valuable during extensive redactions (Cohen's f≈0.2)
  • Individual factors like age and AI familiarity significantly affected trust levels

Why It Matters

As AI mediates sensitive conversations, this research shows transparent explanations are essential for user trust and adoption.