Trump-appointed judges refuse to block Trump blacklisting of Anthropic AI tech
A panel of Trump-appointed judges refused to halt the ban, in a ruling the Justice Department hailed as a 'victory for military readiness.'
A federal appeals court has delivered a significant, though temporary, setback to AI company Anthropic in its legal battle against a Trump administration blacklisting. The U.S. Court of Appeals for the District of Columbia Circuit denied Anthropic's emergency motion for a stay in a 2-1 decision, with both judges in the majority appointed by President Trump. The ruling allows the federal ban—which prohibits all federal agencies and military contractors from using Anthropic's technology—to remain in effect while the court expedites the case for oral arguments on May 19. The judges acknowledged Anthropic would suffer "some degree of irreparable harm," primarily financial, but sided with the government's argument that a stay would force the military "to prolong its dealings with an unwanted vendor" during an ongoing conflict.
The legal battle centers on whether the blacklisting was a lawful national security measure or unconstitutional retaliation. The Trump administration labeled Anthropic a "Supply-Chain Risk to National Security" after the company refused to let its Claude AI models be used for autonomous warfare and mass surveillance of Americans—a refusal Anthropic contends is protected First Amendment activity. Acting Attorney General Todd Blanche hailed the D.C. Circuit's decision as "a resounding victory for military readiness." However, the ruling conflicts directly with a March decision from a California federal court, where a Biden-appointed judge granted Anthropic a preliminary injunction after finding the blacklisting likely violated the First Amendment. The result is a high-stakes legal split that may ultimately require Supreme Court intervention, pitting corporate speech rights against executive authority in national security matters.
- A D.C. Circuit panel of two Trump-appointed judges denied Anthropic's emergency motion to stay its federal blacklisting, expediting the case for a May 19 hearing.
- The government's ban, issued after Anthropic refused military use of Claude AI for autonomous warfare, is defended as a national security necessity but challenged as First Amendment retaliation.
- The ruling creates a direct split with a California court's March injunction in Anthropic's favor, setting up a potential Supreme Court battle over AI, speech, and executive power.
Why It Matters
This case sets a critical precedent for how far the government can go in restricting AI companies based on their ethical policies and could redefine the balance between national security and corporate free speech.