Anthropic says it was blacklisted for opposing autonomous weapons, mass surveillance.
Anthropic sued the Trump administration yesterday in an attempt to reverse the government’s decision to blacklist its technology. Anthropic argues that it exercised its First Amendment rights by refusing to let its Claude AI models be used for autonomous warfare and mass surveillance of Americans and that the government blacklisted it in retaliation.
“When Anthropic held fast to its judgment that Claude cannot safely or reliably be used for autonomous lethal warfare and mass surveillance of Americans, the President directed every federal agency to ‘IMMEDIATELY CEASE all use of Anthropic’s technology’—even though the Department of War had previously agreed to those same conditions,” Anthropic said in a lawsuit in US District Court for the Northern District of California. “Hours later, the Secretary of War [Pete Hegseth] directed his Department to designate Anthropic a ‘Supply-Chain Risk to National Security,’ and further directed that ‘effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.’”
Anthropic said the First Amendment gives it “the right to express its views—both publicly and to the government—about the limitations of its own AI services and important issues of AI safety.” Anthropic further argued that the process for designating it a supply chain risk did not comply with the procedures mandated by Congress. The supply chain risk designation is supposed to be used only to protect against risks that an adversary may sabotage systems used for national security, the lawsuit said.
Trump’s directive “requiring every federal agency to immediately cease all use of Anthropic’s technology, and actions taken by other defendants in response to that directive, are outside any authority that Congress has granted the Executive,” and violate the Fifth Amendment’s due process clause, Anthropic said.
Anthropic’s lawsuit was filed against Hegseth, the Department of War (previously called the Department of Defense), and numerous other federal agencies. Anthropic also filed a motion for preliminary injunction and a second lawsuit asking for review in the US Court of Appeals for the District of Columbia Circuit.
White House: Anthropic is “radical left, woke company”
The Pentagon declined to comment. The White House responded by calling Anthropic a “radical left” and “woke” firm.
“President Trump will never allow a radical left, woke company to jeopardize our national security by dictating how the greatest and most powerful military in the world operates,” a White House spokesperson said in a statement provided to Ars. “The President and Secretary of War are ensuring America’s courageous warfighters have the appropriate tools they need to be successful and will guarantee that they are never held hostage by the ideological whims of any Big Tech leaders. Under the Trump Administration, our military will obey the United States Constitution—not any woke AI company’s terms of service.”
A brief supporting Anthropic was filed in the California federal court by the Foundation for Individual Rights and Expression, the Electronic Frontier Foundation, the Cato Institute, the Chamber of Progress, and the First Amendment Lawyers Association. The groups said that Pentagon retaliation against Anthropic will “silence future speech from those who fear the government attempting to harm their business or extinguish it entirely.”
Calling the government’s actions “transparently retaliatory and coercive,” the advocacy groups wrote that the court “need not guess at the government’s retaliatory motives because the Pentagon has already announced them… Until recently, it was rare for government leaders to so openly and proudly boast about retaliating against someone for their protected speech. Now it is commonplace. Evidently only those who agree to be complicit in this administration’s assertion of unfettered power are safe.”
Google and OpenAI staff support lawsuit
Another brief supporting Anthropic was filed by various technical, engineering, and research employees of Google and OpenAI. Google is an investor in Anthropic. The Google and OpenAI employees wrote that “mass domestic surveillance powered by AI poses profound risks to democratic governance—even in responsible hands.” On the topic of autonomous weapon systems, they wrote that “current AI models are not reliable enough to bear the responsibility of making lethal targeting decisions entirely alone, and the risks of their deployment for that purpose require some kind of response and guardrails.”
The Google and OpenAI employees said that in using the supply chain risk designation “in response to Anthropic’s contract negotiations, [the Pentagon] introduces an unpredictability in our industry that undermines American innovation and competitiveness. It chills professional debate on the benefits and risks of frontier AI systems and various ways that risks can be addressed to optimize the technology’s deployment.”
Anthropic CEO Dario Amodei explained the company’s objections to certain AI uses in a February 26 post. “We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values,” he wrote.
Current law allows the government to “purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant,” and “AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale,” Amodei wrote.
CEO: Autonomous weapons too risky
Amodei expressed support for partially autonomous weapons like those used in Ukraine, but not for fully autonomous weapon systems “that take humans out of the loop entirely and automate selecting and engaging targets.” He said that fully autonomous weapons “may prove critical for our national defense” eventually but that AI is not yet reliable enough to power them.
“We will not knowingly provide a product that puts America’s warfighters and civilians at risk,” he wrote. “We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer. In addition, without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. They need to be deployed with proper guardrails, which don’t exist today.”
Trump responded with a Truth Social post on February 27. “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution,” Trump wrote. “Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY.”
Hegseth then wrote that “Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon.” Hegseth said the military “must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.”
Anthropic said later that day that it had engaged in months of negotiations with the government and would challenge any supply chain risk designation in court. “Designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for US adversaries, never before publicly applied to an American company… No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” Anthropic said.
