OpenAI’s “compromise” with the Pentagon is what Anthropic feared
OpenAI secured a military contract by citing existing laws rather than setting new prohibitions, as Anthropic had attempted.
OpenAI has secured a controversial agreement with the US Department of Defense that allows military use of its AI technologies in classified operations. The deal, announced February 28, came after the Pentagon publicly reprimanded Anthropic for refusing to negotiate similar terms. CEO Sam Altman acknowledged the negotiations were "definitely rushed," but emphasized that OpenAI did not accept the same terms Anthropic rejected. The company published a blog post outlining protections against autonomous weapons development and mass domestic surveillance, positioning itself as having secured both the contract and the moral high ground through a pragmatic legal approach rather than Anthropic's principled stand.
In practice, OpenAI's agreement relies on citing existing laws and policies, including the Fourth Amendment and a 2023 Pentagon directive on autonomous weapons, rather than establishing new contractual prohibitions. Legal expert Jessica Tillipman notes the published excerpt "does not give OpenAI an Anthropic-style, free-standing right to prohibit otherwise-lawful government use." OpenAI claims a secondary line of defense: embedding safety rules directly into model behavior so they cannot simply be stripped out of military versions. However, the company has not specified how military safety rules differ from civilian protections, and enforcement remains difficult in classified settings where outside oversight is limited. The deal represents a fundamental shift in how leading AI companies engage with government contracts, prioritizing legal frameworks over explicit ethical boundaries.
- OpenAI's deal allows Pentagon use in classified settings by citing existing laws rather than creating new prohibitions
- The approach contrasts with Anthropic's failed negotiations, which sought specific contractual bans on autonomous weapons and surveillance
- OpenAI claims it can embed safety rules into model behavior but hasn't detailed how military protections differ from civilian versions
Why It Matters
Sets a precedent for AI-military partnerships built on legal frameworks rather than ethical prohibitions, potentially accelerating defense AI adoption with limited oversight.