Research & Papers

Bureaucratic Silences: What the Canadian AI Register Reveals, Omits, and Obscures

Analysis of 409 government AI systems shows transparency tools may hide more than they reveal.

Deep Dive

A new academic paper titled 'Bureaucratic Silences: What the Canadian AI Register Reveals, Omits, and Obscures' delivers a critical analysis of Canada's first Federal AI Register, released in November 2025. Researchers from the University of Toronto and Cornell University applied the Algorithmic Decision-Making Adapted for the Public Sector (ADMAPS) framework to all 409 listed systems, combining quantitative mapping with qualitative coding. Their findings reveal a stark gap between the government's 'sovereign AI' rhetoric and bureaucratic reality: the vast majority of systems (86%) serve internal efficiency gains rather than transformative public services.

The study argues that the register functions as an 'instrument of ontological design' that actively shapes what counts as accountable AI. By systematically omitting details about human discretion, staff training requirements, and uncertainty management protocols, the register constructs AI as reliable technical tooling rather than as sociotechnical systems involving contestable decisions. This design choice, the authors warn, risks reducing accountability to a mere compliance exercise, offering the appearance of transparency (visibility) without enabling meaningful public scrutiny or challenge (contestability). The paper concludes that without fundamental redesign, such registers may perpetuate the very bureaucratic silences they purport to eliminate.

Key Points
  • Analysis of all 409 systems in Canada's Federal AI Register using the ADMAPS framework
  • 86% of AI systems are deployed internally for bureaucratic efficiency, not public-facing services
  • Register systematically omits human factors like discretion, training, and uncertainty management

Why It Matters

Highlights how government transparency initiatives can inadvertently obscure the human elements crucial for AI accountability.