The artificial intelligence lab argued that the Trump administration was punishing it for speaking about the risks of its technology.
https://www.washingtonpost.com/technology/2026/03/26/pentagon-anthropic-national-security-risk-order-blocked/
A federal judge in San Francisco blocked a Pentagon order Thursday labeling the artificial intelligence company Anthropic a national security risk, saying officials had likely violated the law and retaliated against the firm for speaking publicly about how it wanted its technology to be used.
"Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," District Court Judge Rita F. Lin wrote.
The immediate practical implications of the ruling are unclear, but it represents a clear victory for the AI lab, which has been locked in a bitter power struggle with the Defense Department over the military's use of its Claude system. Defense officials pushed the company to allow the technology to be used for any lawful purpose, but Anthropic wanted a bar on its use in mass domestic surveillance and in powering fully autonomous weapons.
Anthropic argued in court that the government was overstepping its legal authority and punishing the company for exercising its right to speak about its technology's risks. The company said in legal filings that the administration's actions had made customers wary, even those with no ties to the federal government, and could cost it billions in future revenue.
Lin wrote that her order does not prevent the Pentagon from choosing to stop doing business with Anthropic, but she barred the Trump administration from taking broader steps against the company. Her ruling is not the final say because a separate case related to a different law is playing out in another federal court in Washington. A panel of judges handling that case has yet to issue a ruling.