AI Safety

Governing frontier general-purpose AI in the public sector: adaptive risk management and policy capacity under uncertainty through 2030

An academic paper argues that current static compliance models are inadequate for rapidly advancing frontier AI systems.

Deep Dive

A new research paper by Fabio Correa Xavier, published on arXiv, tackles the challenge of governing frontier general-purpose AI in the public sector through 2030. The paper argues that AI governance has evolved from a technical performance issue into a fundamental problem of institutional design. It highlights a growing 'evidence dilemma': the capabilities of models like GPT-5 and Claude 4 are advancing rapidly, while knowledge about potential harms, effective safeguards, and successful interventions remains partial and lagging. The result is a high-stakes policy environment in which governments must make crucial decisions under significant uncertainty.

To address this, the paper proposes moving away from traditional, static compliance-based governance models. Instead, it advocates for an adaptive risk management framework built on principles of scenario-aware regulation and sociotechnical transformation. The framework, informed by analysis from the International AI Safety Report 2026 and OECD policy documents, emphasizes that successful AI adoption in government depends heavily on organizational redesign, public-sector institutional dynamics, and data collaboration capacity. It calls for integrating continuous capability monitoring, risk tiering, conditional controls, and standards-based interoperability to create governance mechanisms robust enough to handle divergent technological futures through the next decade.

Key Points
  • Identifies an 'evidence dilemma' where AI capability growth outpaces understanding of risks and safeguards.
  • Proposes adaptive governance with scenario-aware regulation, moving beyond static compliance models.
  • Emphasizes that public sector AI success depends on organizational redesign and data collaboration, not just technology.

Why It Matters

Provides a concrete framework for policymakers to manage AI risks like disinformation and bias as models grow more powerful.