AI Safety

Monthly Roundup #39: February 2026

Military AI contracts in jeopardy as models cannot guarantee compliance with orders, raising supply chain concerns.

Deep Dive

Zvi's February 2026 LessWrong roundup reports that Anthropic's defense contract with the Pentagon is failing. The core issue: LLMs like Claude are probabilistic and cannot guarantee absolute compliance with military orders the way deterministic tools can. This has triggered a potential "supply chain risk" designation that could cripple the defense tech sector. OpenAI and Google stand ready as replacements, but the episode exposes a fundamental mismatch between how LLMs operate and what military command structures require.

Why It Matters

Highlights the real-world limits of AI reliability in critical national security applications, forcing a strategic rethink of where probabilistic systems belong.