OpenAI Had Banned Military Use. The Pentagon Tested Its Models Through Microsoft Anyway
Despite a 2023 ban, the Pentagon accessed OpenAI's AI through Microsoft's Azure service, sparking internal debate.
A WIRED investigation reveals a significant shift in OpenAI's stance on military applications. Despite a 2023 usage policy that explicitly banned military use, the Pentagon had already begun testing OpenAI's models that same year through Microsoft's Azure OpenAI Service. Microsoft, OpenAI's largest investor with broad commercialization rights, made the service available to the US government under its own terms, an arrangement both companies say was not governed by OpenAI's usage policies. The testing occurred before OpenAI formally removed the blanket military ban in January 2024, a change some employees first learned about from media reports.
The policy evolution has created internal tension, with employees expressing confusion and concern about the company's direction. After the change, OpenAI announced a partnership with defense tech firm Anduril for "national security missions," which it described as limited to unclassified work. By contrast, OpenAI rejected a proposal from Palantir to offer its models for classified military use through Palantir's "FedStart" program, deeming the arrangement too high-risk. The episode illustrates the ethical and operational challenges AI companies face as powerful models move into sensitive government and defense sectors, where commercial opportunity must be balanced against stated safety principles.
- The Pentagon tested OpenAI models via Microsoft Azure in 2023, despite OpenAI's own policy banning military use at the time.
- OpenAI removed its blanket military ban in January 2024 and later signed a national security partnership with defense contractor Anduril.
- Employee concerns persist internally, with some arguing the models remain too unreliable for high-stakes military applications.
Why It Matters
This highlights the ethical gray areas and policy enforcement challenges as foundational AI models become integral to national security.