AI Safety

The EU AI Act and the Rights-based Approach to Technological Governance

New analysis shows how the EU's landmark legislation transforms rights from aspirational goals into enforceable legal triggers.

Deep Dive

A new academic paper by Georgios Pavlidis offers a critical analysis of the landmark EU AI Act, arguing that it establishes a rights-based framework that could reshape global AI governance. Published on arXiv and in the Review of European and Comparative Law, the paper examines how the Act places fundamental rights at the heart of its risk-based approach, embedding protections from the EU Charter of Fundamental Rights both explicitly and implicitly. Unlike previous regulatory models, the Act transforms rights from aspirational principles into concrete legal thresholds that trigger specific compliance requirements.

The analysis suggests that fundamental rights now function as procedural triggers across the entire AI system lifecycle, from development and training to deployment and monitoring. This means companies building high-risk AI systems (such as those used in hiring, law enforcement, or critical infrastructure) must demonstrate rights compliance at each stage. The paper positions the EU AI Act as a potential global model for human-centric AI regulation, while acknowledging that significant challenges will emerge during implementation, particularly around enforcement mechanisms and technical standards for rights preservation.
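The lifecycle-trigger idea can be made concrete with a small sketch. The following is purely illustrative, assuming hypothetical names: the use-case categories loosely echo the high-risk examples above (hiring, law enforcement, critical infrastructure), but the stage names and compliance checks are assumptions for illustration, not wording from the Act or the paper.

```python
# Hypothetical sketch: fundamental rights as procedural triggers across an
# AI system's lifecycle. The categories, stage names, and checks below are
# illustrative assumptions, not the Act's actual text.

HIGH_RISK_USE_CASES = {"hiring", "law_enforcement", "critical_infrastructure"}

# Checks triggered at each lifecycle stage for a high-risk system.
LIFECYCLE_CHECKS = {
    "development": ["fundamental-rights impact assessment"],
    "training": ["data governance review", "bias evaluation"],
    "deployment": ["human oversight plan", "transparency notice"],
    "monitoring": ["post-market monitoring", "incident reporting"],
}

def required_checks(use_case: str) -> dict[str, list[str]]:
    """Return the rights-compliance checks triggered at each stage.

    High-risk use cases trigger the full set; all others trigger none
    in this simplified sketch.
    """
    if use_case in HIGH_RISK_USE_CASES:
        return LIFECYCLE_CHECKS
    return {stage: [] for stage in LIFECYCLE_CHECKS}

if __name__ == "__main__":
    for stage, items in required_checks("hiring").items():
        print(f"{stage}: {items}")
```

The point of the sketch is the structure, not the content: rights obligations attach at every stage of the pipeline rather than as a single pre-release gate.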

Key Points
  • The EU AI Act transforms fundamental rights from goals into enforceable legal thresholds across AI system lifecycles
  • Rights protections from the EU Charter are embedded as procedural triggers in the Act's risk-based framework
  • The legislation could serve as a global model for rights-preserving AI despite implementation challenges

Why It Matters

For AI developers and companies, this means that demonstrating rights compliance becomes part of the technical development process itself, not just an ethical consideration layered on afterward.