Congress Draws a Line on AI Weapons After Anthropic-Pentagon Standoff
Senate Democrats introduce bills to ban autonomous AI weapons and mass surveillance after the Pentagon blacklisted Anthropic. What the legislation says and why it matters.

What happens when the Pentagon blacklists an American AI company for refusing to build autonomous weapons? Apparently, Congress starts writing laws. Over the past two weeks, Senate Democrats have introduced a flurry of legislation aimed at putting hard limits on how the military can use artificial intelligence — and the catalyst was the Anthropic-Pentagon confrontation that rocked the industry in early March.
The Chain of Events
It started with a contract negotiation. In February, Anthropic pushed back against the Pentagon's terms for deploying Claude models on classified networks, insisting on two red lines: no autonomous weapons without human authorization, and no mass surveillance of American citizens. The Defense Department refused.
What happened next was unprecedented. The Pentagon designated Anthropic — a San Francisco-based company, not a foreign adversary — as a "supply chain risk," a label historically reserved for entities like Huawei and never before applied to a domestic AI firm. Anthropic sued in federal court, and the AI industry split.
OpenAI stepped into the vacuum, signing its own deal with the Pentagon. The optics were not great. Sam Altman later admitted the agreement looked "opportunistic and sloppy." And then OpenAI's own robotics lead resigned over the deal's ethical implications — a story that became its own headline.
What the Bills Actually Say
Three separate pieces of legislation emerged from the fallout, each targeting a different angle.
The AI Guardrails Act, introduced by Sen. Elissa Slotkin (D-MI) on March 17, is the most concrete. It establishes three prohibitions: no use of autonomous weapons to kill without human authorization, no AI-driven mass surveillance of Americans, and no AI involvement in nuclear weapons launches. The safeguards cover the full AI lifecycle — development, testing, deployment, and post-deployment monitoring. There's an escape valve: the Defense Secretary can override the limits in extraordinary circumstances, but must notify Congress.
"Congress is behind in putting left and right limits on the use of AI, and the first place to start should be at the Pentagon." — Sen. Elissa Slotkin
Sen. Adam Schiff (D-CA) is drafting a companion bill that would mandate "meaningful human control" over AI systems in combat. It draws on Biden-era frameworks and may be attached to the National Defense Authorization Act. Schiff has been blunt about what the Anthropic situation revealed:
"Whenever a technology has the capability of taking a human life, there needs to be a human operator in the chain of command. We don't want to delegate that kind of responsibility over life and death to an algorithm."
Separately, Sen. Bernie Sanders and Rep. Alexandria Ocasio-Cortez introduced the AI Data Center Moratorium Act on March 25, which would pause new data center construction until federal AI safeguards are in place. It's a different bill with a different scope, but it's part of the same wave of Democratic pushback against unchecked AI deployment.
The Bigger Picture
Cornell Law professor Michael C. Dorf made a sharp observation: the Pentagon's refusal to accept Anthropic's conditions implies the government may actually intend to conduct mass surveillance and deploy autonomous weapons. Otherwise, why not simply agree to the restrictions?
Human Rights Watch warned that autonomous weapons risk placing civilians in grave danger because such systems cannot reliably distinguish between combatants and civilians. The organization has been pushing for international regulation, but domestic legislation like Slotkin's bill represents a more immediate path.
The industry itself is fractured. OpenAI initially adopted similar red lines in its own Pentagon contract — no mass surveillance, no autonomous weapons, no social credit systems — but critics point out that these are voluntary commitments, not legal requirements. Schiff said he would have "far more confidence in statutory requirements" than in relying on the goodness of any company or the lawfulness of the Pentagon.
In the House, Rep. Sam Liccardo (D-CA) tried a different approach: an amendment to the Defense Production Act that would prohibit federal agencies from retaliating against AI vendors that place limits on how their technology may be used. It failed on a party-line vote.
What Happens Next
The political math is not encouraging for the bills' supporters. Democrats are in the minority in both chambers, and some lawmakers worry the legislation will be seen as implicit criticism of the Trump administration. The approaching midterm elections narrow the window further.
But the Anthropic lawsuit is still working through federal court. On March 24, U.S. District Judge Lin questioned whether Anthropic was being punished for its refusal and whether the Defense Department had violated the law. A ruling in Anthropic's favor could accomplish through the judiciary what Congress may struggle to do through legislation.
Meanwhile, the AI arms race between the U.S. and China continues to accelerate. Slotkin framed her bill not as anti-military but pro-competitiveness: "We must win the AI race against China. But to do that, we need action that puts limits on AI in the Department of Defense." The question is whether guardrails and a sprint can coexist — or whether one will inevitably slow the other.