Algorithmic Administration and the EU AI Act: Legal Principles for Public Sector Use of AI
How the EU AI Act governs AI in benefits, migration, and law enforcement...
A new academic paper by Georgios Pavlidis and Ioannis Kastanas, published in the Journal of Ethics and Legal Technologies, analyzes the intersection of the EU AI Act with fundamental principles of administrative law. The study focuses on how public sector deployment of high-risk AI systems—particularly in sensitive domains like social benefits, migration, education, and law enforcement—must align with legal principles such as administrative discretion, the duty to state reasons, and proportionality. The authors argue that while the AI Act introduces a risk-based regulatory framework, it may not fully ensure accountability, transparency, and reviewability in automated public decision-making.
The paper proposes specific safeguards and interpretative strategies to bridge gaps between the AI Act's obligations and existing administrative law standards. It explores whether the Act's approach adequately addresses challenges like algorithmic opacity and the potential for disproportionate impacts on citizens. By examining case studies from these sensitive sectors, the authors highlight where the Act succeeds and where it falls short, offering recommendations for ethical and lawful AI deployment in the public sector. This work is particularly timely as governments increasingly adopt AI for critical decisions affecting individuals' rights and access to services.
- Examines the EU AI Act's interaction with administrative law principles, including discretion, the duty to state reasons, and proportionality
- Focuses on high-risk AI systems in social benefits, migration, education, and law enforcement
- Proposes safeguards and interpretative strategies to ensure accountability and transparency in automated public decision-making
Why It Matters
This paper offers guidance on how governments can deploy AI legally and ethically in critical public services.