Microsoft, Google, Amazon say Anthropic Claude remains available to non-defense customers
Major cloud providers clarify that the DoD's 'supply-chain risk' designation doesn't block commercial access to Anthropic's models.
Microsoft, Google, and Amazon Web Services have moved swiftly to reassure their enterprise and startup customers that access to Anthropic's Claude AI models will continue uninterrupted for non-defense workloads. The reassurances follow the U.S. Department of Defense's controversial decision to designate Anthropic a 'supply-chain risk,' a label typically reserved for foreign adversaries, after the AI company refused to grant the Pentagon unrestricted access to its technology for applications such as mass surveillance and fully autonomous weapons. Under the designation, the DoD itself cannot use Claude, and defense contractors must certify that they are not using Anthropic's models in their work on defense contracts.
Microsoft said its lawyers studied the determination and concluded that Claude can remain available through products such as Microsoft 365, GitHub, and its AI Foundry, with only the DoD itself excluded. Google and AWS issued similar statements, clarifying that the restriction applies solely to direct defense contract work. Anthropic CEO Dario Amodei has vowed to fight the designation in court, arguing it is being misapplied. For the vast majority of commercial users on these cloud platforms, it is business as usual for deploying Claude, even as the legal and ethical battle between the AI safety-focused startup and the U.S. government intensifies.
- The U.S. DoD designated Anthropic a 'supply-chain risk' after it refused tech access for mass surveillance and autonomous weapons.
- Microsoft, Google, and AWS confirmed the restriction only applies to direct DoD contracts, not their broader commercial customer bases.
- Anthropic CEO Dario Amodei is challenging the designation in court, calling its application to all customer relationships incorrect.
Why It Matters
The clarification ensures enterprise AI roadmaps aren't disrupted, preserving access to a leading LLM while highlighting the growing tension between AI ethics commitments and government demands.