Startups & Funding

No one has a good plan for how AI companies should work with the government

Sam Altman's public Q&A backfires after OpenAI takes on the controversial military contract Anthropic rejected.

Deep Dive

OpenAI's decision to accept a Pentagon contract that rival Anthropic had just walked away from, citing ethical concerns over surveillance and automated weaponry, sparked a public relations crisis for CEO Sam Altman. In a hastily arranged Q&A on X, Altman defended the move by deferring to democratic processes and elected officials, stating that it wasn't his role to set national policy. The backlash from users and employees, however, revealed a significant disconnect, with many questioning who should hold this kind of power: unelected tech companies or elected governments. The incident marks a pivotal moment as OpenAI transitions from a consumer-focused startup into a piece of critical national security infrastructure, a role for which it appears unprepared.

The controversy is compounded by the U.S. government's aggressive stance: Defense Secretary Pete Hegseth has threatened to designate Anthropic a supply-chain risk, a move that could cripple the company by cutting off its access to hardware and partners. This unprecedented threat against an American AI firm has sent shockwaves through the industry, exposing the absence of established protocols for collaboration between AI companies and the government. Both sides are now forced into more serious engagement (tech giants need massive capital, and the government needs advanced AI), yet the rules remain dangerously undefined, putting ethical boundaries and corporate survival alike in jeopardy.

Key Points
  • OpenAI accepted a Pentagon contract that Anthropic rejected over ethical concerns about mass surveillance and automated killing.
  • CEO Sam Altman's public Q&A defense, deferring to democratic processes, backfired, revealing significant public and internal dissent.
  • The U.S. government threatened to designate Anthropic as a supply-chain risk, an unprecedented move that could destroy the company.

Why It Matters

The lack of clear rules for AI-government collaboration creates ethical, legal, and business risks for the entire tech industry.