Research & Papers

A User-Centric Analysis of Explainability in AI-Based Medical Image Diagnosis

Half of surveyed physicians trusted false AI diagnoses regardless of which XAI method explained them...

Deep Dive

Researchers Julia Wagner and Tim Schlippe conducted a user-centric analysis of explainability in AI-based medical image diagnosis, surveying 33 physicians. The study compared state-of-the-art textual, visual, and multimodal XAI methods to determine which format best helps doctors trust and verify AI decisions. Results showed that 88% of physicians consider AI explanations important (64% strongly agreed), but the explanation format matters greatly: a bounding box highlighting the region of interest combined with a textual report rated highest in understandability, completeness, speed, and applicability.

The most concerning finding: when presented with false AI diagnoses, 50% of participants still trusted the incorrect AI output, regardless of which explanation method accompanied it. This reveals a dangerous over-reliance on AI even when explanations are provided, and it underscores the need for XAI design that actively supports critical evaluation rather than passive acceptance. The paper was presented at the 4th International Workshop on eXplainable Artificial Intelligence in Healthcare in Pavia, Italy.

Key Points
  • 88% of 33 surveyed physicians said AI must explain its diagnosis; 64% strongly agreed.
  • Bounding box + report combo outperformed other XAI methods in understandability, completeness, speed, and applicability.
  • 50% of participants trusted false AI diagnoses regardless of the XAI method used, indicating a critical over-reliance problem.

Why It Matters

Without better explainability, doctors may blindly trust flawed AI, undermining patient safety and clinical adoption.