Arknights: Playable Explanation and Player Agency under Opacity
New research argues that the mobile game's AI companion, PRTS, trains players to act on unverifiable explanations.
A new research paper by Shuai Guo, titled "Arknights: Playable Explanation and Player Agency under Opacity," uses the popular mobile tower defense game as a living lab for human-AI interaction. The study conducts a qualitative close reading of the game's diegetic AI companion, PRTS (Preliminary R.I.I.C. Terminal System), which guides players through complex missions. Guo's analysis reveals that PRTS provides explanations that are sufficient to prompt player action but deliberately insufficient for players to verify or fully understand the underlying causality. This design, built on incomplete information and narrative disruptions of trust, trains millions of players to operate effectively within a system they cannot fully see or control.
The paper argues that this interactive model moves beyond traditional Explainable AI (XAI) approaches focused on transparency and visualization. Instead, Arknights demonstrates 'explanatory agency': a mode in which user competence is reorganized toward abductive reasoning and interpretation rather than direct control or complete understanding. This finding challenges the prevailing XAI paradigm that prioritizes making AI systems fully interpretable. For AI designers, the game offers a blueprint for building interfaces in which users collaborate effectively with opaque AI systems, such as large language models (LLMs), by learning to act on probabilistic guidance and manage uncertainty.
- The study analyzes Arknights' AI system PRTS, which provides 'usable but unverifiable' explanations to guide player action.
- Player agency shifts from direct control to 'explanatory agency,' relying on abductive reasoning with incomplete information.
- The research offers a new framework for XAI design, suggesting effective human-AI collaboration can exist without full system transparency.
Why It Matters
The study provides a model for designing AI assistants we can use effectively, even when we can't fully understand their 'black box' reasoning.