
Anthropic’s Lawsuit Should Absolutely Destroy the Pentagon in Court

In Politics
March 12, 2026

But make no mistake: The company is not one of the good guys.


Anthropic CEO Dario Amodei, Chief Product Officer Mike Krieger and Head of Communications Sasha de Marigny give a press conference on May 22, 2025.

(Julie Jammot / AFP via Getty Images)

Anthropic, maker of the “Claude” AI model, has sued the Department of Defense in two separate lawsuits, including one alleging that the government is violating its First Amendment rights. The conflict arose last week when the Trump administration labeled the company a “supply chain risk” and banned government agencies, or any entity working with the US military, from using the Claude system. The Trump administration now calls Claude a national security risk. (The second lawsuit takes issue with this designation, which, until now, has never been used against a US company.)

The blacklisting followed months of fighting between Anthropic and the government. Anthropic wants to keep “safeguards” on Claude that prevent the system from being used to power autonomous weapons—basically, killing machines that can conduct military operations without human involvement—and to engage in widespread surveillance of Americans. The Trump administration wants the company to loosen those safeguards. Evidently, Secretary of War Crimes Pete Hegseth wants the killer robots now, and he doesn’t like Anthropic getting in his way.

The government repeatedly threatened Anthropic with consequences if it didn’t remove its safety restrictions. It would appear the supply chain risk designation and associated blacklisting are those consequences.

All of this should make the Anthropic lawsuit a slam dunk, at least the First Amendment part, assuming there are still judges and justices willing to hold the Trump administration accountable to the Constitution, even in the realm of national security. Anthropic’s complaint makes a pretty clear-cut case for a First Amendment violation. (I’m less knowledgeable about the other claim, though my assumption, based on prior history, is that the Trump administration is indeed in violation of every law it’s accused of violating.)

The simple facts are these: The government wanted Anthropic to make its AI do something. Anthropic didn’t want to make its AI do it, because of its beliefs, and those beliefs are protected under the First Amendment. The government punished Anthropic with an adverse national security designation, because the company wouldn’t do what the government wanted. That is a free speech violation.

It would have been one thing if the government simply decided to use another AI provider or, heaven forbid, stopped using AI for military purposes. That wouldn’t violate the First Amendment; it would simply be the government opting to use a different service. But the government didn’t merely take its business elsewhere—it decided to punish Anthropic by declaring it a national security threat.


As happens so often, Donald Trump’s chronic inability to keep his mouth shut even when he is violating the Constitution should help make Anthropic’s case for it. On social media, he called Anthropic “out-of-control” and a “RADICAL LEFT, WOKE COMPANY” of “Leftwing nut jobs.” He’s not saying that the company is no longer able to provide a useful service to the government; he’s saying the government is blacklisting the company for its political views.

Hegseth doubled down on these comments. According to the complaint, when Hegseth issued the blacklist order, he “denounced what he characterized as Anthropic’s ‘Silicon Valley ideology,’ ‘defective altruism,’ ‘corporate virtue-signaling,’ and ‘master class in arrogance.’ And he criticized Anthropic for not being ‘more patriotic.’”

All of that violates the First Amendment. The DOD can use any service provider it wants, but it can’t give a company an adverse legal designation for lack of “patriotism.” Punishing people for insufficiently waving the flag is one of those things the First Amendment was designed to stop.

There is recent case law, from the Trump-controlled Supreme Court no less, that should help Anthropic’s case as well. In National Rifle Association v. Vullo, the NRA successfully argued that the superintendent of the New York State Department of Financial Services, Maria Vullo, had pressured banks and insurance companies to cease doing business with the NRA and other pro-gun groups in the wake of the Parkland shooting. The Supreme Court ruled that this violated the NRA’s First Amendment rights, essentially saying that New York State was using its power to take business away from the NRA because New York didn’t like what the NRA stands for.

That ruling was 9–0, by the way. The unanimous opinion was written by Justice Sonia Sotomayor, who is not exactly on the ammosexual side of the spectrum. But: Trying to crush a business because the government doesn’t like what the business does is a textbook violation of the First Amendment. I assume the justices who treat Trump as God on national security issues (Chief Justice John Roberts and Justices Clarence Thomas, Sam Alito, and alleged attempted rapist Brett Kavanaugh) will find some way to walk back their views from Vullo and decide that the First Amendment doesn’t matter when Trump wants your company to automate killing people, but that still only gets the Trump administration to four votes.

Anthropic should win, but, here’s the thing: It’s not exactly one of the good guys. Yes, the current crop of war criminals running the government wants horrible things, but Anthropic mostly wants to provide them. It’s not, after all, like it didn’t seek out the $200 million worth of contracts the government is now trying to take away. And the company’s leaders have been falling all over themselves to talk about how “patriotic” they are, and how much they believe in using AI for national security. They’re basically saying they’ll let Claude do anything other than pull the actual trigger:

Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. We were the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.

The company wants to help the Trump administration do almost all of the bad things the Trump administration wants to do. And it’s happy to play along in ways both big and very small (see its repeated, ingratiating references to the “Department of War”).

Here’s my read: I feel like Anthropic is just trying to keep plausible deniability for when, inevitably, its system is used in the most obviously egregious way. Just think of it this way: When Claude kills the “wrong” person (or, more likely, village full of people), the lawsuit isn’t going to just come at the US government; it’s going to be company-wrecking litigation filed against Anthropic as well. And I will bet all of Claude’s venture capital funding that the government will try to blame any violent mishaps on Anthropic and not the guys drunkenly running the DOD. All of the company’s rhetoric and safety protocols about what Claude should not be used for strike me as an early-warning liability shield more than anything else.

Anthropic strikes me as the guys who split the atom and then said, “But, we’re only going to use this for science, not to make… bombs that could destroy all of human civilization, right? Right, Robbie Oppenheimer?” Like, sure, you can want your technology to “only be used for good,” but… that’s not how technology works. And it’s definitely not how the US war machine works.

The best thing to happen would be for the DOD to be prevented from using autonomous lethal AI and from surveilling the American public by an act of Congress, not through the defense of Anthropic’s First Amendment rights. This situation cries out for legislation, not a 5–4 Supreme Court ruling about whether the government can blacklist companies that won’t do its bidding.

The Trump administration shouldn’t be able to list a company as a national security threat because it won’t make terminators. But while Anthropic (for now) doesn’t want its technology to be used this way, the next company won’t have a problem with it. OpenAI, maker of ChatGPT, is already trying to fill the void left by Claude.

Eventually we’ll be told that we simply have to make autonomous killing robots because the Chinese or the Russians or the Klingons are already doing it and we can’t fall behind.

As usual, Terminator 2 predicted all of this.

John Connor: “We’re not gonna make it, are we? People, I mean.”

Terminator: “It’s in your nature to destroy yourselves.”


Elie Mystal



Elie Mystal is The Nation’s justice correspondent and a columnist. He is also an Alfred Knobler Fellow at the Type Media Center. He is the author of two books: the New York Times bestseller Allow Me to Retort: A Black Guy’s Guide to the Constitution and Bad Law: Ten Popular Laws That Are Ruining America, both published by The New Press. You can subscribe to his Nation newsletter “Elie v. U.S.” here.
