Tech Insider

  • Anthropic isn't going to let the Defense Department use Claude as it pleases.
  • CEO Dario Amodei stood firm, despite Defense Secretary Pete Hegseth's ultimatum this week.
  • Here's what smart people are saying about Anthropic's fight with the Pentagon.

Anthropic has drawn a hard boundary with the Pentagon. CEO Dario Amodei said on Tuesday that the company "cannot in good conscience accede" to the Defense Department's request that it agree to the military's terms for the use of its model, Claude.

Amodei's blog post came after Defense Secretary Pete Hegseth gave the company an ultimatum: Cave and cooperate with the military on Claude, or be blacklisted.

Here's what smart people are saying about Anthropic's public spat with the Pentagon.

Jack Shanahan

Shanahan, a retired US Air Force lieutenant general, served 36 years in the military and worked on the Pentagon's AI efforts. He is now a senior fellow at the Center for a New American Security, where he consults on AI for national security.

Shanahan said Anthropic isn't "trying to play cute here" — and that it's "committed to helping the government."

In his words:

No LLM, anywhere, in its current form, should be considered for use in a fully lethal autonomous weapon system. It's ludicrous even to suggest it (and at least in theory, DoDD 3000.09 wouldn't allow it without sufficient human oversight). So making this a company redline seems reasonable to me.

Despite the hype, frontier models are not ready for prime time in national security settings. Over-reliance on them at this stage is a recipe for catastrophe.

Mass surveillance of US citizens? No thanks. Seems like a reasonable second redline.

That's it. Those are the two showstoppers. Painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end.

Why not work on what kind of new governance is needed to ensure secure, reliable, predictable use of all frontier models, from all companies? This is a shared government-industry challenge, demanding a shared government-industry (+ academia) solution.

This should never have become such a public spat. Should have been handled quietly, behind the scenes. Scratching my head over why there was such a misunderstanding on both sides about terms & conditions of use. Something went very wrong during the rush to roll out the models.

Palmer Luckey

Luckey, who founded and runs the weapons and defense software startup Anduril, pointed to a previous White House decision to compel private companies to work with the US government.

Posting on X several hours before Amodei's statement, he referenced a 1948 order by then-President Harry S. Truman that directed the US military to take over and run the railroads amid a workers' strike.

Later that evening, Luckey said that Silicon Valley executives should not influence military policy.

"This idea, that military policy must be in the hands of elected leaders vs corporate executive, is a foundational principle for Anduril," he wrote.

Since Luckey founded Anduril in 2017, the company has clinched a rapidly growing list of Defense Department contracts, providing drone, counter-drone, and AI software systems to the military.

Anduril is also in the running to build the US Air Force's Collaborative Combat Aircraft, which are drone wingmen for crewed fighter jets.

Dean Ball

Former Trump administration AI advisor Dean Ball did not mince words in an interview with Politico.

"You're telling everyone else who supplies to the DOD you cannot use Anthropic's models, while also saying that the DOD must use Anthropic's models," Ball told Politico.

Ball, who helped write Trump's AI Action Plan, added that it was "a whole different level of insane" for the Pentagon to "do both of those things" — designate Anthropic a security risk, while making the case for how essential Anthropic is for military AI.

Michael McFaul

McFaul, a political science professor at Stanford University and director of its Freeman Spogli Institute for International Studies, praised Amodei's statement as "strong, principled, and very reasonable."

"Bravo," wrote McFaul, who was also the US ambassador to Russia from 2012 to 2014.

Thomas Wright

Wright was the senior director for strategic planning at the US National Security Council during the Biden administration.

"Many firms folded for a lot less money than what Anthropic stands to lose here," Wright wrote in a social media post on Thursday evening.

As a top-ranking member of the council, he was a key architect of the 2022 US National Security Strategy, which was the framework for the administration's defense priorities and threat assessments.

Read the original article on Business Insider