AI Safety

Corporations Constitute Intelligence

A new legal paper argues that corporate AI governance, exemplified by Anthropic's 79-page constitution for Claude, creates a 'political community deficit'.

Deep Dive

A new legal paper titled 'Corporations Constitute Intelligence' by scholar Gilad Abiri offers a sharp critique of Anthropic's ambitious 79-page constitution for its Claude AI model. Published as an arXiv preprint and slated for the California Law Review, the article argues that despite its philosophical sophistication, the document harbors two critical structural defects. First, it contains a significant loophole: models deployed to the U.S. military, such as Claude integrated into Palantir's Maven platform during strikes in Iran, operate under different rules, creating an ethical double standard. Second, the constitution's very comprehensiveness is itself a problem: it settles profound questions about AI values and moral status that should remain open to public democratic deliberation.

Abiri's analysis highlights a stark disconnect between corporate governance and public will. He points to Anthropic's own 2023 experiment in participatory constitution-making, which found roughly 50% divergence between publicly sourced principles and those authored by the company; the democratically sourced version also reportedly produced lower bias across nine social dimensions. That the final 2026 constitution incorporated none of these findings underscores Abiri's central thesis: AI governance suffers from a 'political community deficit', the absence of any democratic body authorized to set the foundational principles governing AI behavior. The paper concludes that corporate transparency, however admirable, is no substitute for democratic legitimacy, raising urgent questions about who gets to define the ethical boundaries of increasingly powerful AI systems.

Key Points
  • Anthropic's 79-page Claude constitution has a military loophole, allowing different rules for models like those in Palantir's Maven platform.
  • A 2023 Anthropic experiment found roughly 50% divergence between public and corporate AI principles, with the publicly sourced version showing lower bias across nine social dimensions.
  • The paper argues AI governance has a 'political community deficit,' where corporate rule-setting replaces democratic legitimacy.

Why It Matters

This critique challenges the core premise of industry self-governance, arguing the public must have a direct say in AI's ethical foundations.