Judge Blocks Pentagon's 'Supply Chain Risk' Label on Anthropic
A federal judge in San Francisco barred the Department of Defense from designating Anthropic as a supply chain risk, calling the move 'arbitrary and capricious.'

"The Department of War provides no legitimate basis to infer from Anthropic's forthright insistence on usage restrictions that it might become a saboteur." That's not a legal blogger or an industry analyst — that's a federal judge, writing about the United States government's treatment of one of the country's leading AI companies.
What Happened
On March 26, Judge Rita Lin of the US District Court in San Francisco granted a preliminary injunction barring the Department of Defense from labeling Anthropic — the maker of Claude — as a "supply chain risk." The designation, which Lin called "likely both contrary to law and arbitrary and capricious," had triggered a cascade of consequences: Claude was being pulled from use across federal agencies, and Anthropic's commercial reputation was suffering.
The conflict traces back to a straightforward disagreement. The Pentagon had been using Claude for writing sensitive documents and analyzing classified data. But when Anthropic insisted on maintaining usage restrictions for military applications — guardrails the Trump administration considered unnecessary — the relationship soured. The DOD, which under the current administration refers to itself as the "Department of War," determined that Anthropic "could not be trusted" and began severing ties.
What followed went beyond a contract dispute. The administration issued several directives, including the supply chain risk designation, that were slowly choking off Claude's presence across the entire federal government. Anthropic responded by filing two lawsuits challenging the sanctions as unconstitutional.
In a prior hearing, Judge Lin had already signaled her thinking, saying the government appeared to be illegally trying to "cripple" and "punish" the company. In her written ruling, she went further, calling the designation a "classic First Amendment" issue — the government retaliating against a company for exercising its right to set terms on how its technology gets used.
The injunction restores the status quo as of February 27, before the directives were issued, though the order won't take effect for another week. A second Anthropic lawsuit, filed in the DC appeals court, remains pending.
Why This Matters
The case sits at an uncomfortable intersection of national security, corporate autonomy, and the First Amendment. The Pentagon's argument boiled down to this: if a company won't give us unrestricted access to its AI, that company is a risk. The court flatly rejected that logic.
The ruling also drew unexpected solidarity from across the industry. Workers at OpenAI and Google filed an amicus brief supporting Anthropic — a striking move given that these companies compete directly for the same government contracts. The implication is clear: if the government can punish one AI company for maintaining safety restrictions, any of them could be next.
For Anthropic, the stakes extend well beyond government revenue. The company has built its brand around responsible AI development, including careful guardrails on how its models are deployed. Having the Pentagon officially label that caution as a national security threat would undermine the entire premise. Meanwhile, the broader AI industry is grappling with its own questions about where safety boundaries should be drawn.
What's Next
The injunction is preliminary — it preserves the status quo while the case moves forward, but doesn't resolve the underlying legal questions. The DC appeals court case could add another layer of complexity, and the administration may appeal Lin's order. For now, though, Anthropic has won the first round, and the message from the bench is unambiguous: the government can't weaponize procurement designations to punish a company for having principles.