The Download: AI health tools and the Pentagon’s Anthropic culture war
A judge blocks the Pentagon's supply chain risk label for Anthropic, while medical chatbots from Microsoft and OpenAI face evaluation concerns.
The latest edition of MIT Technology Review's The Download highlights critical tensions in AI deployment across healthcare and government. In healthcare, a surge of AI medical chatbots from tech giants like Microsoft, Amazon, and OpenAI promises to ease access problems in a strained medical system. But as Grace Huckins reports, experts are concerned that these tools undergo little rigorous, external evaluation before public release, leaving their real-world efficacy and safety in question despite clear market demand.
In a separate defense-sector controversy, a judge has temporarily blocked the Pentagon's attempt to label the AI company Anthropic a supply chain risk and bar government agencies from using it. As James O'Donnell reports, the judge's intervention suggests the dispute escalated unnecessarily because the government bypassed existing formal processes. The Pentagon's decision to wage the conflict on social media further inflamed the situation, a tactic that ultimately backfired and underscored procedural failures in handling emerging-technology risks.
- Medical AI chatbots from Microsoft, Amazon, and OpenAI are launching with minimal external safety evaluation, raising red flags.
- A judge blocked the Pentagon's order to stop using Anthropic AI, calling its "supply chain risk" label procedurally flawed.
- The Pentagon's strategy of taking the Anthropic feud to social media is cited as a key reason the conflict spiraled.
Why It Matters
These stories underscore the urgent need for robust evaluation frameworks for public-facing AI and proper governance for national security tech disputes.