AI Safety

Sam Altman says OpenAI shares Anthropic's red lines in Pentagon fight

Sam Altman draws the same ethical lines as Anthropic, complicating the Pentagon's AI plans and sparking an industry-wide stand.

Deep Dive

OpenAI CEO Sam Altman has publicly aligned his company with rival Anthropic's ethical stance in the ongoing conflict with the Pentagon, declaring shared "red lines" against using AI for mass surveillance or autonomous lethal weapons. In a memo to staff obtained by Axios, Altman stated this is now an industry-wide issue, emphasizing that humans must remain in the loop for high-stakes automated decisions. This solidarity complicates the Pentagon's efforts to replace Anthropic's Claude model, which was integrated into sensitive military work, and marks the first collective stand by the nation's top AI leaders on government use of their technology. However, Altman clarified he still wants to strike a deal to deploy ChatGPT in classified military systems, potentially positioning OpenAI as a replacement if the Pentagon declares Anthropic a "supply chain risk."

The proposed deal would allow military use of OpenAI models for "all lawful purposes," with explicit carve-outs for domestic surveillance and autonomous offensive weapons. OpenAI's proposed enforcement measures include continuous security monitoring, cleared researchers to track usage, and technical safeguards confining models to cloud environments rather than edge devices such as weapons. These proposals may face the same Pentagon resistance Anthropic encountered, with officials criticizing what they see as excessive private-company influence over critical government work. The conflict escalated after Pentagon official Emil Michael denounced Anthropic CEO Dario Amodei, while employees from OpenAI and Google signed a solidarity letter. The outcome will set a precedent for how AI giants govern the military application of powerful models like GPT-4 and Claude 3.5.

Key Points
  • Altman's memo bans AI for mass surveillance and autonomous lethal weapons, mirroring Anthropic's rejected Pentagon terms.
  • OpenAI still seeks a classified military deal for ChatGPT, but with strict guardrails excluding unlawful and offensive uses.
  • The stand complicates Pentagon plans to replace Claude and sets a precedent for industry-wide AI ethics in government contracts.

Why It Matters

This collective ethical stance by AI leaders could redefine government contracts and set binding precedents for responsible military AI deployment.