AI Safety

Making AI Compliance Evidence Machine-Readable

New open-source SDK generates machine-readable compliance reports as a byproduct of model training.

Deep Dive

A team of researchers has published a paper proposing a concrete technical solution to a major hurdle in AI governance: generating machine-readable compliance evidence. Frameworks like the EU AI Act, ISO/IEC 42001, and the NIST AI RMF specify what must be assured, but they offer no executable format for how to demonstrate it. The researchers propose adopting OSCAL (Open Security Controls Assessment Language), a NIST standard already used for FedRAMP cybersecurity compliance, as an interchange format for AI governance, and they define 16 property extensions to cover AI-specific needs such as lifecycle phases and risk traceability.
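
To make the idea concrete, here is a minimal Python sketch of what an OSCAL Assessment Results fragment carrying AI-specific property extensions could look like. The article does not list the paper's 16 extension names, so the namespace URI and the property names below (ai-lifecycle-phase, ai-risk-id) are illustrative stand-ins, and the document is trimmed: a schema-valid file would also need required OSCAL fields such as import-ap.

```python
import json
import uuid
from datetime import datetime, timezone

# Hypothetical namespace for AI-governance extension props; the paper's
# actual namespace and property names are not given in this summary.
AI_NS = "https://example.org/ns/oscal-ai-extensions"

def ai_prop(name: str, value: str) -> dict:
    """Build an OSCAL 'prop' object (name/ns/value) carrying an AI extension."""
    return {"name": name, "ns": AI_NS, "value": value}

now = datetime.now(timezone.utc).isoformat()

# Trimmed OSCAL Assessment Results skeleton with two illustrative
# extension properties: a lifecycle phase and a risk-traceability link.
assessment_results = {
    "assessment-results": {
        "uuid": str(uuid.uuid4()),
        "metadata": {
            "title": "Credit scoring model - training run evidence",
            "last-modified": now,
            "version": "1.0.0",
            "oscal-version": "1.1.2",
        },
        "results": [
            {
                "uuid": str(uuid.uuid4()),
                "title": "Training-time assurance evidence",
                "description": "Evidence emitted during model training.",
                "start": now,
                "props": [
                    ai_prop("ai-lifecycle-phase", "training"),
                    ai_prop("ai-risk-id", "RISK-CS-001"),  # hypothetical risk ID
                ],
            }
        ],
    }
}

print(json.dumps(assessment_results, indent=2))
```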

To operationalize this, the team presents a 'Compliance-as-Code' architecture with three layers: policy, evidence, and enforcement. The architecture generates the required assurance evidence automatically as a byproduct of model training and development. The accompanying open-source SDK produces native OSCAL Assessment Results, which are validated against the official NIST JSON schema. The approach was tested on two systems classified as high-risk under Annex III of the EU AI Act, a credit scoring model and a medical imaging segmentation system, demonstrating practical applicability.
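
The article does not show the SDK's internals, but the schema-validation step it describes can be sketched independently. The snippet below, using the third-party jsonschema library, checks a generated report against a locally downloaded copy of the NIST OSCAL assessment-results schema; both file paths are assumptions, and the Draft7Validator choice reflects the draft-07 dialect the released OSCAL JSON schemas declare (adjust if your release differs).

```python
import json
from pathlib import Path

from jsonschema import Draft7Validator  # pip install jsonschema

# Assumed local paths: the NIST OSCAL assessment-results JSON schema
# (downloaded from the OSCAL releases) and the report our pipeline emitted.
SCHEMA_PATH = Path("oscal_assessment-results_schema.json")
REPORT_PATH = Path("assessment_results.json")

def validate_oscal_report(report_path: Path, schema_path: Path) -> list[str]:
    """Return a list of schema-violation messages; an empty list means valid."""
    schema = json.loads(schema_path.read_text())
    report = json.loads(report_path.read_text())
    return [err.message for err in Draft7Validator(schema).iter_errors(report)]

errors = validate_oscal_report(REPORT_PATH, SCHEMA_PATH)
if errors:
    for msg in errors:
        print("schema violation:", msg)
else:
    print("report is valid OSCAL Assessment Results")
```

Wiring a check like this into CI is one way the enforcement layer could gate releases: a training run that fails to emit schema-valid evidence simply does not ship.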

Key Points
  • Proposes using the existing NIST OSCAL standard, with 16 custom extensions, as a machine-readable format for AI governance evidence.
  • Introduces a 'Compliance-as-Code' architecture and open-source SDK that automatically generates validated OSCAL reports during model development.
  • Successfully tested on high-risk AI use cases (credit scoring & medical imaging) relevant to the EU AI Act and other major frameworks.

Why It Matters

Provides a scalable, automated way for companies to demonstrate AI system compliance, reducing manual audit burden and regulatory risk.