The Perceptual Gap: Why We Need Accessible XAI for Assistive Technologies
New paper reveals XAI methods fail users with sensory disabilities, creating dangerous 'perceptual gaps'.
A new research paper by Shadab H. Choudhury, accepted as a poster for CHI '26, sounds a critical alarm about the state of Explainable AI (XAI) for assistive technologies. The paper, titled 'The Perceptual Gap: Why We Need Accessible XAI for Assistive Technologies,' argues that while AI systems such as image describers and speech captioning tools are widely used by people with sensory disabilities, the methods that explain *why* these black-box models produce certain outputs are fundamentally inaccessible to those same users. This creates a dangerous disconnect: users must trust AI decisions without understanding the reasoning behind them, which can lead to misinformation or missed critical details.
The research surveys existing XAI work and finds a near-total absence of accessibility-centered design or evaluation. Typical XAI outputs, such as visual heatmaps overlaid on images or dense textual justifications, are often incomprehensible to users who are blind, have low vision, or are deaf or hard of hearing. The paper proposes that future XAI development adopt a human-centered, accessibility-first approach, exploring multi-modal explanations (e.g., tactile, auditory, simplified language) tailored to each user's needs. This shift is essential for building trustworthy, safe, and equitable AI systems that serve all users, not just those without disabilities.
- Survey finds almost no XAI research accounts for users with sensory disabilities such as blindness or deafness.
- Standard XAI outputs (e.g., visual heatmaps, complex text) are often unusable for disabled users who rely on AI for perception.
- Proposes a new field of 'Accessible Human-Centered XAI' to build trustworthy, multi-modal explanations for assistive tech.
Why It Matters
Ensures AI assistive tools are trustworthy and safe for millions of users who depend on them daily.