After launching a court battle against the Pentagon, artificial intelligence company Anthropic has found support from a company that's no stranger to legal fights: Microsoft.
On Tuesday, the tech giant – accused several times by regulators around the world of illegal monopolistic behaviour – filed a legal document in support of Anthropic, recently declared by the US to be a “supply chain risk to national security”.
The designation came after the company refused a demand from the Pentagon to lift guardrails on its AI technology. The supply chain risk label throws Anthropic's unfinished deals with several US companies into jeopardy.
“As an Anthropic partner, Microsoft believes the court should temporarily enjoin implementation of the determination for all existing contracts and their ongoing use of Anthropic products,” Microsoft's document read. The company said it supports Anthropic's refusal to bow to the Pentagon's wishes.
“AI should not be used to conduct domestic mass surveillance or put the country in a position where autonomous machines could independently start a war,” Microsoft said.
The show of support from Microsoft, which has since the late 1990s gone toe-to-toe with the US government over monopoly concerns, has given Anthropic's argument a significant boost.
Dean Ball, who worked on crafting US President Donald Trump's AI Action Plan and has since been critical of the White House's efforts to force Anthropic's hand, said Microsoft's support speaks volumes.
“I would not have predicted this and I think Microsoft deserves serious praise here,” he posted on X.
As the Pentagon was taking steps to sever ties with Anthropic and demanding other US government agencies stop using its AI technology, Defence Secretary Pete Hegseth called out the company over its perceived “betrayal”.
“Anthropic delivered a masterclass in arrogance and betrayal, as well as a textbook case of how not to do business with the US government or the Pentagon,” he said. Washington “must have full, unrestricted access to Anthropic’s models for every lawful purpose in defence of the Republic”.
As Anthropic has several signed business contracts with the US government, the Pentagon's reaction has drawn concern.
“Whether you agree or disagree with Anthropic’s concerns about potential abuse if it were to remove the safeguards, the underlying fact pattern of the Pentagon’s retaliation over its refusal to remove them should concern you,” said Jennifer Huddleston, a technology policy fellow for the Cato Institute, a libertarian think tank in Washington.
She called the Pentagon's actions “an affront to the values enshrined in the First Amendment and those that the US has represented as a free society”.

Anthropic has gained some public sympathy. Its Claude AI app has soared to the top of the app download charts, and many legal analysts believe its efforts to hit back at the US government in court will yield results.
During a discussion on AI at the Brookings Institution in Washington, Democratic Senator Mark Kelly said he was concerned the Pentagon's actions had put Anthropic “at risk”, while also pointing out that there was no turning back from AI being used in military conflicts.
“I think we’ll find out a lot more later about this war against Iran about how these [AI] systems have been used effectively, and in other instances where there have been issues that we’ll have to work out going forward, but it is going to be a big part of our defence ecosystem now and in the future,” Mr Kelly said.
While the court battle unfolds, US and Israeli strikes on Iran continue and Washington has not been shy about touting the country's implementation of AI.
“Our war fighters are leveraging a variety of advanced AI tools,” said US Admiral Brad Cooper on Wednesday.
“These systems help us sift through vast amounts of data in seconds, so our leaders can cut through the noise and make smarter decisions faster than the enemy can react.”
Admiral Cooper, likely aware of the criticism over the ethics of using AI in conflict and the potential lack of accountability it introduces on the battlefield, also sought to address those concerns.
“Humans will always make the final decisions on what to shoot, what not to shoot and when to shoot,” he said, pointing out that AI is expediting much of the planning previously involved in military strikes.