Interpretative Interfaces: Designing for AI-Mediated Reading Practices and the Knowledge Commons
A new research paper argues for moving beyond Explainable AI (XAI) to let users directly manipulate a model's internal representations.
A new research paper from Gabrielle Benabdallah, accepted at a CHI 2026 workshop, challenges the status quo of XAI. Benabdallah argues that current XAI methods, which merely explain a system's behavior, are insufficient for true understanding: scientists and others who rely on LLMs for literature reviews and research have no way to directly engage with or probe how these models process and transform text. An explanation of behavior is not the same as the ability to interact with it.
Benabdallah proposes a paradigm shift from 'explainability' to 'interpretative engagement.' Drawing inspiration from textual scholarship and historical reading practices like marginalia and annotation, she envisions interactive environments where non-expert users can manipulate a model's intermediate representations. Specifically, an interpretative interface would allow a user to select a single token (like a word or concept) and visually follow its changing semantic position as it moves through the model's layers. Users could then annotate these transformations, effectively 'reading' and inscribing their understanding onto the model's internal processes. The goal is to reframe AI interpretability as an interaction design project, paving the way for AI tools that support critical thinking and stewardship of knowledge, rather than opaque automation.
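The paper frames this conceptually rather than as an implementation, but the core mechanism, reading off a token's hidden state at every layer, is easy to illustrate. The sketch below is an assumption-laden example, not the paper's system: it uses the HuggingFace transformers library with gpt2 as a stand-in model, a hypothetical trace_token function, and a simple PCA projection in place of whatever visualization such an interface would actually use.

```python
# Minimal sketch: extract one token's hidden-state trajectory across layers.
# `trace_token` and the choice of gpt2 are illustrative assumptions, not
# details from the paper.
import torch
from sklearn.decomposition import PCA
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "gpt2"  # stand-in; any model that exposes hidden states works

def trace_token(text: str, token_index: int):
    """Return a (num_layers + 1, 2) array: one 2-D point per layer."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # hidden_states is a tuple: the embedding layer plus one tensor per
    # transformer layer, each shaped (batch, seq_len, hidden_dim).
    states = torch.stack([h[0, token_index] for h in outputs.hidden_states])
    # Project each layer's vector to 2-D so the trajectory can be drawn.
    return PCA(n_components=2).fit_transform(states.numpy())

trajectory = trace_token("The commons is a shared resource.", token_index=1)
for layer, (x, y) in enumerate(trajectory):
    print(f"layer {layer:2d}: ({x:+.2f}, {y:+.2f})")
```

Each row of the returned trajectory is one layer's view of the token; an interpretative interface in Benabdallah's sense would render these points as a path the user can follow, and annotate.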
- Proposes a shift from passive XAI explanations to active 'interpretative engagement,' in which users manipulate model internals.
- Envisions interfaces where users can select a token, trace its semantic trajectory through a model's layers, and annotate what they observe (a possible annotation record is sketched after this list).
- Frames AI transparency as an interaction design challenge, drawing on historical reading practices like marginalia and glosses.
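As a companion to the trajectory sketch above, here is one hypothetical shape the 'marginalia' could take: a record binding a user's gloss to a specific (token, layer) coordinate. The schema is an assumption for illustration; the paper does not prescribe a data model.

```python
# Hypothetical annotation record; the schema is assumed, not from the paper.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Annotation:
    token: str       # surface form of the annotated token
    layer: int       # which layer's representation the gloss refers to
    note: str        # the user's marginal gloss
    author: str = "anonymous"
    created: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

gloss = Annotation(
    token="commons",
    layer=6,
    note="Representation shifts toward the 'shared resource' sense here.",
)
print(f"{gloss.token}@layer{gloss.layer} [{gloss.author}]: {gloss.note}")
```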
Why It Matters
This could transform how researchers use LLMs critically, shifting them from trusting black-box outputs to actively interrogating how those models process knowledge.