Media & Culture

OpenAI’s head of robotics resigns over Pentagon deal, warning about surveillance and lethal autonomy

Caitlin Kalinowski quit over OpenAI's hastily announced defense contract, warning of unchecked surveillance and lethal autonomous weapons.

Deep Dive

OpenAI's head of robotics, Caitlin Kalinowski, has resigned in protest over the company's recently announced agreement with the U.S. Department of Defense. Kalinowski, who led efforts on robotics and physical AI systems, said her departure was driven by governance concerns, specifically the rushed nature of the deal's announcement. She warned it lacked defined guardrails against two critical risks: surveillance of Americans without judicial oversight and the development of lethal autonomous weapons systems without human authorization.

Her resignation underscores a significant rift within the tech industry over AI's role in national security. The timing is particularly notable: OpenAI's deal was announced just hours after reports surfaced that rival AI firm Anthropic had refused to authorize broad military uses of its models, a decision that reportedly led the U.S. government to designate Anthropic a supply chain risk. Kalinowski emphasized that her objection was not to defense partnerships in principle, but to an accelerated decision-making process that bypassed necessary deliberation on profound ethical questions.

In response to the controversy, OpenAI CEO Sam Altman has stated the contract will be adjusted to prevent the use of its models for domestic surveillance of U.S. citizens. However, the incident highlights the escalating tension between AI companies racing to deploy advanced technology and the ethical frameworks required to govern its most consequential applications, especially in robotics and autonomous systems with direct military potential.

Key Points
  • OpenAI Robotics Lead Caitlin Kalinowski resigned over a rushed Pentagon AI deal, citing governance failures.
  • She specifically warned against surveillance of Americans without oversight and lethal autonomous weapons without human control.
  • The resignation came hours after reports that Anthropic refused similar military uses of its models, prompting a government "supply chain risk" designation.

Why It Matters

Highlights a critical ethical schism in AI governance as powerful models move into military and surveillance applications.