Explainable AI for Blind and Low-Vision Users: Navigating Trust, Modality, and Interpretability in the Agentic Era
New study reveals blind users often blame themselves for AI failures, highlighting a critical trust gap.
A research team comprising Abu Noman Md Sakib, Protik Dey, Zijie Zhang, and Taslima Akter has published a paper titled 'Explainable AI for Blind and Low-Vision Users: Navigating Trust, Modality, and Interpretability in the Agentic Era.' Submitted to the Human-centered Explainable AI Workshop at CHI 2026, the paper investigates the distinct challenges blind and low-vision (BLV) users face as AI systems evolve from simple tools into autonomous agents. Because these agents take multi-step actions and make consequential decisions, a single undetected error can propagate irreversibly before the user receives any feedback, making trust and understanding paramount.
Drawing on user interviews and an analysis of contemporary research, the study uncovers a critical 'modality gap': while BLV users highly value conversational explanations, current Explainable AI (XAI) development remains overwhelmingly visual, excluding them from independent use of AI-driven assistive technologies. More troubling still, the study finds that BLV users frequently experience 'self-blame' when AI systems fail, a pattern that signals a serious breakdown in trust and system accountability.
The paper concludes with a concrete research agenda to bridge this gap: developing multimodal interfaces that go beyond visual cues, designing 'blame-aware' explanations that make system limitations explicit, and shifting toward participatory development that includes BLV users throughout the design process. The work is a call to action for the AI community to ensure the benefits of agentic AI are accessible to all, not just those who can see its outputs.
Key Takeaways
- Identifies a critical 'modality gap' where current XAI is too visual, excluding blind and low-vision users.
- Reveals BLV users often experience 'self-blame' for AI failures, highlighting a severe trust and accountability issue.
- Proposes a research agenda for agentic AI, focusing on multimodal interfaces and participatory design with BLV communities.
Why It Matters
As AI becomes more autonomous, ensuring it is trustworthy and accessible for all users is a fundamental ethical and design imperative.