Key Takeaways:

  • U.S. District Judge Rita Lin issued a preliminary injunction on March 26 blocking the Trump administration from labeling Anthropic a supply chain risk or banning federal agencies from using its Claude AI models.

  • The judge called the designation "classic illegal First Amendment retaliation" after Anthropic refused to allow the Pentagon unrestricted use of Claude for autonomous weapons or mass surveillance of Americans.

  • The supply chain risk statute has historically been reserved for foreign adversaries and terrorists, making Anthropic the first American company to receive the designation.

A federal judge in San Francisco blocked the Trump administration on March 26 from banning Anthropic's AI technology across the federal government, issuing a 43-page ruling that called the Pentagon's actions likely illegal.

Judge Rita Lin granted Anthropic's request for a preliminary injunction, pausing both the Pentagon's supply chain risk designation and President Trump's directive ordering all federal agencies to stop using Claude. The order is stayed for seven days to give the government time to appeal.

The dispute began over a contract negotiation. Anthropic signed a $200 million contract with the Pentagon in July 2025, but when the Defense Department sought unrestricted access to Claude for all lawful purposes, Anthropic insisted on assurances that the technology would not be used for fully autonomous weapons or domestic mass surveillance. The Pentagon refused. Anthropic held its position. Then the retaliation started.

Defense Secretary Pete Hegseth declared Anthropic a supply chain risk on social media. Trump posted that federal agencies should immediately cease all use of Anthropic's technology. The Pentagon's under secretary of defense called Anthropic's CEO a liar with a God complex. And the administration invoked a statute historically reserved for foreign adversaries and terrorists to designate an American company.

Lin's ruling was direct. She wrote that nothing in the governing statute supports branding an American company a potential adversary for expressing disagreement with the government. She called the designation likely both contrary to law and arbitrary and capricious. She noted the Pentagon could simply stop using Claude if it objected to the terms, but instead chose measures that appeared designed to punish Anthropic.

The case continues. Anthropic has filed a second suit in Washington challenging the supply chain designation through formal administrative review. The preliminary injunction means the ban is paused, not resolved.

People Also Ask

Q: Why did the Pentagon ban Anthropic? A: Anthropic refused to grant the Pentagon unrestricted access to its Claude AI models, seeking assurances the technology would not be used for autonomous weapons or mass surveillance of Americans.

Q: What is a supply chain risk designation? A: A federal designation under 10 U.S.C. § 3252, historically used against foreign adversaries and terrorists, that bars Defense contractors from using the designated entity's products.

Q: What did the judge say about the Anthropic ban? A: Judge Rita Lin called the supply chain risk designation "Orwellian," said it was "classic illegal First Amendment retaliation," and found it likely both contrary to law and arbitrary and capricious.

Q: Is Anthropic still banned from government work? A: No. The preliminary injunction pauses the ban while the case proceeds. The order is stayed for seven days to allow a potential government appeal.
