Decision Trace Schema for Governance Evidence in Real-Time Risk Systems
New JSON Schema bridges four infrastructure layers to create a complete audit trail for AI decisions.
Researcher Oleg Solozobov has introduced the Decision Event Schema (DES), a new JSON Schema specification designed to close a critical gap in AI governance: the 'Fragmented Trace Problem.' Automated systems generate logs across multiple layers, such as ML inference and policy rules, but no single format captures the complete story of how a decision was made. DES bridges four key infrastructure layers (ML inference, rule/policy evaluation, cross-system coupling, and governance metadata) within one unified per-decision event structure, providing the holistic audit trail that is currently missing.
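To make the idea concrete, a single per-decision event spanning all four layers might look like the sketch below. All field names and values here are illustrative assumptions, not the actual DES field names, which are defined by the specification itself.

```python
# Hypothetical sketch of one per-decision event covering all four layers.
# Field names are illustrative and NOT taken from the DES specification.
decision_event = {
    "decision_id": "d-2024-000123",
    "timestamp": "2025-01-15T09:30:00Z",
    "ml_inference": {                  # layer 1: ML inference
        "model_id": "credit-risk-v7",
        "score": 0.82,
    },
    "policy_evaluation": {             # layer 2: rule/policy evaluation
        "rules_fired": ["limit_check", "kyc_flag"],
        "outcome": "deny",
    },
    "coupling": {                      # layer 3: cross-system coupling
        "upstream_decisions": ["d-2024-000120"],
    },
    "governance": {                    # layer 4: governance metadata
        "policy_version": "2025.01",
        "evidence_tier": "sampled",
    },
}
```

The point of the unified structure is that an auditor can reconstruct the full decision path from one record instead of joining logs across separate systems.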
The schema employs a 'degradation-aware' field design, in which six top-level field groups map to specific governance evidence properties and the types of data loss they must resist. It defines ten required root-level fields and, crucially, introduces a practical, tiered evidence strategy with 'lightweight,' 'sampled,' and 'full' tiers, allowing organizations to scale the completeness of their logging to match the risk and throughput of different decisions. An analysis confirms DES is compatible with high-throughput systems and is the only evaluated specification covering all four layers simultaneously. It serves as a direct reference for practitioners and gives regulators a clear mapping from requirements to evidence.
- Solves the 'Fragmented Trace Problem' by unifying logs from four infrastructure layers (ML, rules, coupling, metadata) into one JSON Schema.
- Introduces a tiered evidence strategy (lightweight, sampled, full) to match logging detail to decision risk and system throughput.
- Evaluation shows it's the only schema covering all four layers, offering a direct path to regulatory compliance for automated systems.
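Because the schema defines ten required root-level fields, conformance can be enforced at ingestion time. In practice a JSON Schema validator would do this; the minimal check below uses a hypothetical field list, since the real set of required fields is defined by the specification.

```python
# Hypothetical required root-level field names; the actual list of ten
# required fields is defined by the DES specification, not shown here.
REQUIRED_FIELDS = {
    "decision_id", "timestamp", "ml_inference",
    "policy_evaluation", "coupling", "governance",
}

def missing_required_fields(event: dict) -> set:
    """Return the required root-level fields absent from an event."""
    return REQUIRED_FIELDS - event.keys()
```

Rejecting events with missing required fields at the point of capture is what keeps the downstream audit trail complete rather than best-effort.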
Why It Matters
Provides a standardized, actionable blueprint for companies to audit AI decisions and prove compliance with emerging regulations like the EU AI Act.