Agentic Explainability at Scale: Between Corporate Fears and XAI Needs
New research tackles 'Agent Sprawl' with design-time and runtime explainability tools for AI agents.
As enterprises rapidly adopt low-code platforms to build autonomous AI agents, a dangerous governance gap is emerging. Researchers Yomna Elsayed and Cecily Jones call this 'Agent Sprawl': a scenario in which companies scale their use of agentic AI faster than they scale the governance processes and internal expertise needed to oversee it. The result is significant corporate anxiety about the autonomy these systems exercise and the risks they introduce. The paper, presented at the HCXAI workshop at CHI 2026, argues that while tools exist to discover 'shadow AI' agents, few offer real observability into an agent's configuration and settings, or into the logic behind its communication and orchestration with other agents.
To address these concerns, the research examines the specific needs of AI governance professionals and proposes a two-pronged approach combining design-time and runtime explainability (XAI) techniques. The goal is to make the 'black box' of agentic systems more transparent and auditable. As a practical first step, the authors present a preliminary prototype of an 'Agentic AI Card': a standardized document capturing an agent's purpose, configuration, and decision-making processes, intended to help companies feel more confident and in control when deploying agents across the organization.
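To make the idea concrete, here is a minimal sketch of what such a card might look like as a structured record. The field names and example values below are illustrative assumptions, not the paper's actual schema; the source only states that the card documents an agent's purpose, configuration, and decision-making processes.

```python
from dataclasses import dataclass, field, asdict
import json


@dataclass
class AgenticAICard:
    """Hypothetical sketch of an 'Agentic AI Card' record.

    Field names are assumptions for illustration, loosely covering the
    purpose / configuration / decision-making areas the paper describes.
    """
    agent_name: str
    purpose: str                 # what the agent is for, in plain language
    owner: str                   # accountable team or person
    model_config: dict = field(default_factory=dict)   # model, settings, tools
    orchestration: list = field(default_factory=list)  # other agents it calls
    decision_log_uri: str = ""   # where runtime traces can be audited

    def to_json(self) -> str:
        """Serialize the card for storage in an agent registry or audit trail."""
        return json.dumps(asdict(self), indent=2)


# Example: registering a card for a hypothetical invoice-routing agent.
card = AgenticAICard(
    agent_name="invoice-triage",
    purpose="Route incoming invoices to the correct approval queue",
    owner="finance-automation-team",
    model_config={"model": "example-llm", "temperature": 0.0},
    orchestration=["approval-agent", "notification-agent"],
    decision_log_uri="s3://audit-logs/invoice-triage/",
)
print(card.to_json())
```

A machine-readable card like this would let governance teams diff an agent's declared configuration against what is actually observed at runtime, which is the kind of design-time/runtime pairing the paper advocates.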
- Identifies 'Agent Sprawl' as a key risk when agentic AI adoption outpaces governance.
- Highlights a lack of tools for observing agent configuration and multi-agent decision-making.
- Proposes a prototype 'Agentic AI Card' for standardizing agent documentation and explainability.
Why It Matters
Provides a framework for safe, scalable enterprise AI agent deployment, addressing critical governance and compliance needs.