People inspect the debris of a damaged building following air strikes in central Tehran. AFP

Was Anthropic's AI used by US to attack Iran?


Cody Combs

Amid a public fallout between Anthropic and the US government, reports say its Claude artificial intelligence technology played a significant role in strikes on Iran.

President Donald Trump last week called Anthropic a "radical left" company and told government agencies to immediately stop using its tools, after the fast-rising AI firm said that it would not lift guardrails aimed at preventing the mass surveillance of US citizens, among other things.

FILE PHOTO: U. S. Department of War and Anthropic logos are seen in this illustration taken March 1, 2026. REUTERS / Dado Ruvic / Illustration / File Photo

On Friday, Pentagon chief Pete Hegseth said the US would cancel Anthropic's lucrative contracts and penalise the company by declaring it a "supply chain risk to national security".

Yet within hours of the order for US government agencies to stop using Anthropic's AI tools, Claude played an important role in the US military's strikes on Iran, according to The Wall Street Journal and The Washington Post.

Neither Anthropic nor the Pentagon has responded to The National's requests for comment on this story.

Meanwhile, the feud between Anthropic and the Trump White House continues to boil and has led other AI companies to try to capitalise on the situation.

Hours after the falling out between Anthropic and the Pentagon, OpenAI announced it had reached a deal with the US government similar to the one Anthropic had rejected.

Though OpenAI promised that its contracts with the US would ensure that its AI technology was not misused, a backlash began to build and resulted in protests outside OpenAI's California headquarters.

Some tech experts closely scrutinised OpenAI's promises, and debate ensued over whether the company had given the Pentagon far more latitude in how its AI could be used.

In subsequent days, a grass roots effort aimed at convincing those who pay to use higher tiers of OpenAI's offerings to cancel subscriptions also gained momentum.

Meanwhile, sympathy seems to be growing for Anthropic, with people both inside and outside the technology industry praising the company for standing its ground in the face of tremendous pressure.

Anthropic's Claude rose to the number one spot on Apple's iOS App Store.

With anger directed at OpenAI showing no sign of slowing down, the company's chief executive sought to calm concerns.

"We have been working with the Department of War to make some additions in our agreement to make our principles very clear," Sam Altman posted on X, adding that OpenAI wanted to make perfectly clear that "the AI system shall not be intentionally used for domestic surveillance of US persons and nationals".

Mr Altman also said that he was pushing for Anthropic to no longer be considered a supply chain risk to national security.

The drama is adding to an already lively discourse about how AI should be used on the battlefield.

Although Anthropic does not have the market share of OpenAI or the name recognition of Alphabet-owned Google, the company has gained a loyal following with Claude. Its coding models have also significantly cut software development and website design times.

Proponents of the company's tools insist they level the playing field for start-ups and novice developers.

Anthropic has also gained attention for pushing a vision for AI that differs from that of its rivals.

"Anthropic is a public benefit corporation dedicated to securing its benefits and mitigating its risks," says a message on its website.

In 2025, it went against the grain of many in the US technology industry by defending strict chip export policies that drew criticism from companies such as Nvidia and Microsoft, which felt the rules would harm American influence in the global AI race.

Recent polls indicate that anxiety about the potential ramifications of AI misuse continues to linger in the minds of many in the US.

Groups like the Future of Life Institute, a non-profit that says it promotes the idea of steering “transformative technology towards benefiting life and away from extreme large-scale risks”, have launched campaigns to gain support for stronger AI guardrails.

On Monday, the group announced a "Pro-Human AI Declaration" that it says is supported by faith groups, conservative media voices, labour unions, progressive advocacy outfits, and AI thought leaders.

"AI companies have already rushed to deploy systems which have harmed children and ruined lives at a disturbing scale," said Future of Life Institute chief executive Anthony Aguirre, referring to the declaration.

In a recent blog post, Future of Life Institute president Max Tegmark addressed the continuing quarrel involving Anthropic, OpenAI and the US government.

"Fully autonomous weapons systems and Orwellian AI-enabled domestic mass surveillance are affronts to our dignity and liberty," he said.

"Moreover, current AI systems are inherently unpredictable and fundamentally brittle, unsuited for very high stakes applications."

In a phone interview on Wednesday with The National, Mr Tegmark pointed out that even though many regard Anthropic as a relatively responsible actor, the company has, in recent weeks, revised some of its internal guidelines on how its AI tools can be used.

"Corporate self-regulation does not work," he said.

"These companies all have their own voluntary red lines and they keep breaking them."

Updated: March 04, 2026, 7:27 PM