‘Uncanny Valley’: Pentagon vs. ‘Woke’ Anthropic, Agentic vs. Mimetic, and Trump vs. State of the Union
Claude-maker clashes with Defense Department over bans on autonomous weapons and domestic surveillance.
The latest episode of WIRED's Uncanny Valley podcast details a brewing conflict between AI company Anthropic and the U.S. Department of Defense over a $200 million contract awarded last summer. Anthropic, creator of the Claude AI models, has established strict ethical guardrails prohibiting military use cases like fully autonomous weapons systems and domestic surveillance—limits the Pentagon reportedly considers too restrictive. The clash reflects a fundamental disagreement about how advanced AI should be integrated into national security operations, pitting Anthropic's principles—derided by critics as "woke"—against the military's operational demands.
The podcast hosts—Zoë Schiffer, Brian Barrett, and Leah Feiger—analyze how this standoff reflects broader Silicon Valley tensions between "agentic" and "mimetic" approaches to AI development. While Anthropic advocates for agentic AI (systems that take independent actions within ethical boundaries), military applications often prioritize mimetic capabilities (systems that replicate human decision-making in combat scenarios). The episode also covers AI policy discussion around the State of the Union and bids farewell to the retired TAT-8 undersea cable, the first fiber-optic transatlantic line, framing current AI debates within the history of global internet infrastructure.
- Anthropic's $200M Pentagon contract includes bans on fully autonomous weapons and domestic surveillance use
- Defense Department pushes back against "woke" restrictions, seeking broader military AI applications
- Podcast frames the conflict as "agentic" (independent action within ethical boundaries) vs. "mimetic" (replication of human decision-making) approaches to AI development
Why It Matters
The standoff sets a precedent for whether ethical AI guardrails can withstand government pressure in a defense tech market estimated at $200 billion.