Imbalanced User-AI Relationships as an Ethical Failure of Front-End Design in Healthcare AI
Patients are visible to AI but cannot understand or question it, says new paper.
A new paper by Maureen Mghambi Mwadime, accepted at the CHI 2026 workshop on ethics at the front-end, shifts the ethical focus in healthcare AI from back-end concerns such as bias and explainability to the often-overlooked front-end interface where patients and clinicians interact with AI outputs. The study introduces the concept of 'asymmetric legibility': patients become highly visible to AI systems through data inference, yet cannot understand, question, or influence how they are represented. The paper identifies this imbalance as a distinct class of ethical failure.
Through a chat-based telemedicine case, the paper demonstrates how design choices such as default recommendations, restricted user inputs, and suppressed uncertainty undermine patient agency, clinician judgment, and human oversight, even when the AI system is technically accurate. To address this, Mwadime proposes 'reciprocity' as a design orientation and offers interventions for more balanced, participatory user-AI relationships in healthcare. The work highlights a critical gap in current AI ethics discourse, urging designers to consider how interface decisions can perpetuate power asymmetries.
- Paper identifies 'asymmetric legibility' where patients are visible to AI but cannot understand or influence their data representation.
- Design choices like default recommendations and suppressed uncertainty undermine agency and clinician judgment, per a telemedicine case study (see the sketch below).
- Proposes 'reciprocity' as a design orientation for more balanced user-AI relationships in healthcare.
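To make the interface-level contrast concrete, here is a minimal sketch, assuming a chat-based telemedicine UI that renders plain text messages. It is illustrative only, not code from the paper, and every name in it (AiRecommendation, renderOpaque, renderReciprocal) is hypothetical. It contrasts a rendering that suppresses uncertainty and restricts input with a reciprocity-oriented one that discloses confidence, surfaces the inferred data behind the output, and invites correction.

```typescript
// Hypothetical sketch, not from the paper: all types, names, and strings
// are illustrative assumptions about a chat-based telemedicine UI.

interface AiRecommendation {
  text: string;              // the recommendation shown to the patient
  confidence: number;        // model confidence in [0, 1]
  inferredFactors: string[]; // patient data the model inferred and relied on
}

// Anti-pattern the paper critiques: a default recommendation with
// suppressed uncertainty and no way for the patient to probe or contest it.
function renderOpaque(rec: AiRecommendation): string {
  return `Recommendation: ${rec.text}`;
}

// Reciprocity-oriented alternative: disclose confidence, show the inferred
// data behind the output, and invite correction, so the system becomes
// legible to the patient rather than only the reverse.
function renderReciprocal(rec: AiRecommendation): string {
  const confidencePct = Math.round(rec.confidence * 100);
  return [
    `Recommendation: ${rec.text}`,
    `Model confidence: ${confidencePct}% (a clinician reviews every recommendation)`,
    `Based on: ${rec.inferredFactors.join(", ")}`,
    `Does this look wrong? Reply "review my record" to question or amend these inferences.`,
  ].join("\n");
}

const rec: AiRecommendation = {
  text: "Likely tension headache; consider rest and hydration.",
  confidence: 0.62,
  inferredFactors: ["reported symptom duration", "age range", "prior visit notes"],
};

console.log(renderOpaque(rec));
console.log("---");
console.log(renderReciprocal(rec));
```

The same model output drives both renderings; only the interface changes, which is the paper's central point about where this class of ethical failure lives.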
Why It Matters
For healthcare AI, this means ethical design cannot stop at the model: interface-level choices about defaults, permitted inputs, and how uncertainty is displayed directly shape patient agency, transparency, and human oversight.