

A federal judge in San Francisco called the Pentagon’s ban on Anthropic’s AI tools “troubling” during a hearing on Tuesday.
Anthropic is fighting the Pentagon’s designation of its Claude AI model as a national security supply chain risk. The company sued in federal court after President Trump ordered federal agencies to stop using its products, The Hill reports. This followed Anthropic’s refusal to let the Pentagon deploy Claude for autonomous lethal weapons or mass surveillance of Americans.
Trump labeled Anthropic a “radical left, woke company” that puts troops at risk and gave the Pentagon six months to phase out its products. Defense Secretary Pete Hegseth and other officials demanded access for “all lawful” uses, but Anthropic held firm on its safety limits.
Judge Questions the Ban
U.S. District Judge Rita Lin pressed both sides. She asked whether the government was punishing Anthropic for publicly criticizing its contracting stance and whether the restrictions were actually tailored to national security needs.
“I don’t know if it’s murder, but it looks like an attempt to cripple Anthropic,” Lin said, per MS Now. She added the limits “don’t really seem to be tailored to the stated national security concern.”
Lin said she expects to rule soon, though Anthropic has asked for a decision by March 26. The company is seeking to block the designation while it challenges the decision, alleging First Amendment and due process violations.
Government’s Defense
Justice Department lawyers say the issue is Anthropic’s contract behavior, not its speech. They argue the firm poses a supply chain risk because it could disable or alter Claude mid-operation if its “red lines” were crossed. A memo from Defense Undersecretary Emil Michael backs this position, as detailed in ABC7.
The outcome could shape how the government procures AI and who sets the rules for its use.