Resisting Humanization: Ethical Front-End Design Choices in AI for Sensitive Contexts
A new paper challenges the trend of making AI seem human, especially in high-stakes contexts like trauma support.
A new research paper from academics Silvia Rossi, Diletta Huyskes, and Mackenzie Jorgensen shifts the ethical AI debate from back-end algorithms to front-end design. Published for the CHI 2026 workshop, "Resisting Humanization: Ethical Front-End Design Choices in AI for Sensitive Contexts" argues that the common industry practice of making AI interfaces seem human—through dialogue, personality, and emotive language—is a value-driven design choice, not a neutral default, and one with significant consequences. The authors contend these design elements actively shape users' mental models of the system and can dangerously misalign trust, particularly for vulnerable populations.
The analysis is grounded in a real-world case study from Chayn, a nonprofit supporting survivors of gender-based violence. Chayn's trauma-informed design principles deliberately avoid humanizing their AI systems, practicing 'principled restraint' to prevent fostering misplaced trust or undermining user autonomy. This stands in stark contrast to the engagement-driven norms of mainstream conversational AI. The paper concludes that ethical front-end design is a form of 'procedural ethics,' enacted through deliberate interaction choices rather than being solely embedded in a model's training data or logic.
Key Takeaways
- The paper argues humanizing AI interfaces (e.g., with personality, emotive language) is an ethical design choice, not a neutral one.
- It warns these features can misalign user trust and undermine autonomy, especially in sensitive contexts like trauma support.
- It uses Chayn's trauma-informed AI for survivors of gender-based violence as a case study in 'principled restraint' in design.
Why It Matters
The paper challenges the core design philosophy of most consumer AI, urging caution for high-stakes applications in healthcare, finance, and counseling.