AI at war in Iran: Ruthless targeting machine or risky shortcut?


Thomas Harding

In the first 11 days of the Iran war, America carried out an astonishing 5,500 strikes against targets, using artificial intelligence on a large-scale battlefield for the first time.

Admiral Brad Cooper, commander of US Central Command, said on Wednesday that AI systems “allow us to analyse large volumes of data within seconds, enabling our leaders to make the right decisions faster than the enemy amid the noise”.

He argued that advanced AI tools turned processes that usually take hours, sometimes days, into seconds. “However, the final decision is always made by a human: what to strike, what not to strike, and when to strike,” he added.

But with the bombing of a girls’ school and other unexplained incidents, experts have questioned whether AI was too inaccurate and unproven to provide targeting in major combat operations.

In particular they have highlighted its propensity for errors, and the experience of Gaza, where hundreds of Palestinians are believed to have been killed in AI-instigated attacks.

Admiral Charles Bradford 'Brad' Cooper II, Commander of US Central Command, has praised the capability of AI in aiding US attacks. AFP

“AI is just not ready for military targeting, this stuff is only two or three years old, it's just too soon,” said Dr Peter Bentley, computer scientist at UCL.

Others expressed concerns about military personnel becoming over-reliant on automation, believing it to be infallible.

The faster tempo meant that commanders were “facing a greater pressure to create targets more quickly”, said Noah Sylvia, AI specialist at the Rusi think tank. “But you don't put speed over humanity.”

It was also clear that the US was more tolerant of machine autonomy than European militaries.

Quote
It’s really, really important when you're killing people that it should be a human making the life-or-death decisions
Dr Peter Bentley,
computer scientist at UCL

Automation bias

US targeting planners are using the Maven Smart System, developed by Palantir in 2018, in which AI analyses data, then identifies and prioritises targets.

Anthropic’s Claude system has recently been embedded in it, processing and summarising intelligence coming in from the field and generating targets.

Claude easily “sits on top of your existing digital infrastructure” without any complicated integration “which is why everyone loves it,” said Mr Sylvia.

It receives raw data feeds from multiple sources including satellites, drones with hundreds of hours of footage plus decades of archived data and intelligence. “It's too much for humans to handle so you need automated tools that are able to make sense of it,” he added.

Ground crew prepare to load munitions onto a US Air Force B-1 bomber at RAF Fairford, England. Getty Images

But he highlighted that "one of the uncertainties right now is US tolerance for automation,” under the Trump administration. “We don’t know if there is now a greater tolerance for civilian harm caused by automated systems or not.”

While there was always a human in the loop to make the final strike decision, said Nilza Amaral of Chatham House think tank, her concern was that “humans may rely too much on the system”.

With AI largely untested in major conflict, the increased tempo of warfare and the speed of AI’s decision-making mean there “is less time for human reflection and critical analysis”, encouraging an uncritical acceptance of the system’s output that experts call “automation bias”.

“There's a concern that targeting could end up just being a mere formality because of the automation bias, where people are just relying on what the machine is telling them,” she added.

AI tracking humans

When it comes to tracing high-value individuals and targeting them for assassination, the speed at which AI operates had “played a really important role”, said Rusi’s Dr Thomas Withington, through its ability to track a person’s “electromagnetic footprint or fingerprint”, particularly via phone use.

Previously, high-end electronic eavesdropping agencies such as the UK’s GCHQ or the US National Security Agency found that process “incredibly time consuming”.

“AI speeds this up exponentially, and you probably only ever get a finite window to hit the person that you want to assassinate and anything that speeds up that process is huge. AI pays huge dividends in this regard.”

A protester holds a photograph of a young girl reported killed in the bombing of a primary school in Minab, Iran, during a solidarity rally in Serbia. EPA

Schoolgirls killed

All that leads to questions over the targeting of the Shajareh Tayyebeh primary school in Minab by a Tomahawk cruise missile, which killed 165 people, including 110 schoolgirls.

While a Pentagon investigation is continuing, it is known that the school used to be a military base, raising the possibility that the AI targeting data had not been updated.

“It's possible that this was an AI error but it’s difficult to say, because we don't know what intelligence is being fed into the system and how the system really is making decisions to generate targets,” said Ms Amaral.

Mr Sylvia agreed AI could have played a role, but said that ultimately it was human error if the intelligence had not been updated.

Another worry, which could have had a bearing on the school strike, was that AI models did occasionally “flip a digit around, get a few words the wrong way around or just make something up”, said Dr Bentley.

People and rescue forces work following a reported strike on a school in Minab, Iran. Reuters

“Therefore it’s really, really important when you're killing people that it should be a human making the life-and-death decisions about other human beings, and they should be fully informed decisions not based on unreliable information,” he added.

There were “huge legal questions” over allowing AI “to do everything” from the detection to the prosecution of a target, said Dr Withington.

While a human was always needed in the command cycle to take responsibility for a decision “the problem is at the moment, we don't know what the targeting decisions are” or the current rules of engagement.

This could mean, for example, that a senior Iranian nuclear scientist was next to a girls’ school but the strike went ahead anyway.

Lavender revived

During the Gaza conflict an investigation by The National found that Palestinians with no connections to Hamas were being erroneously hit on multiple occasions.

The foremost targeting tool was the Lavender system, which held information on thousands of Palestinians, alongside “Gospel”, which analyses buildings and terrain. The system produced targets for intelligence operatives who, it was reported, would at times spend just 20 seconds verifying them before authorising a strike.

Ms Amaral contended that it was almost certain that Israel was using the same or similar system for Iranian targets. "We do know that Lavender did make errors in 10 per cent of cases, so we know it's a risk and these systems can make mistakes.”

Strike authorisation will “broadly accord with [a military’s] existing operating procedures”, said Mr Sylvia, adding that the Israelis “had very high tolerance for civilian casualties”.

Although the US had a “mixed track record” on civilian harm, they would have “a greater level of human involvement than the Israelis did, of more than 20 seconds”, he added.

Fact checking

The biggest problem with AI, said the experts, was its habit of giving out false information, which in a military context was even more dangerous.

“If you just give these types of systems open access to the web to just aggregate data, they will constantly bring up false information because they're trying to find sources that you want,” said Tal Hagin, who runs an AI fact-checking unit.

“The biggest danger with AI in general is that humans see it as an all-purpose solution, instead of something that can speed up processes.”

Another issue was how wording or information was inputted into the system to define targets. There was a question of accuracy during a war “when information is moving very, very fast and they have to quickly ascertain what targets to hit,” Mr Hagin said.

AI from Anthropic, the US artificial intelligence safety and research company, has been used by the US military in targeting Iran. AFP

Key was how commands were given because “every little word has an impact on how the AI responds to you” and nuances “can change the output completely”.

But the fundamental issue was that while today’s large language models could summarise huge amounts of information, the unsolved problem was that “you can't tell the difference between a wonderfully accurate summary and something AI has just hallucinated”, said Dr Bentley.

And this was a serious issue for “safety critical applications especially in wartime”.

He gave an example of an initial security briefing summarised by AI where it might have “made a little factoid up” which could then influence a decision that would become amplified by other AI tools, repeating the initial falsehood.

“It's terrifying, because we don’t know who’s checking the facts and this brand-new technology is just being rushed through”.

Terminator’s return

While it might be regarded as science fiction, he did warn that AI could create a situation where “a lot of innocent people are going to be killed”.

As other academics have previously referenced, he also raised the scenario of the Terminator films where AI takes over weapons systems.

This was reinforced by recent game simulations with AI models playing each other “and they weren't afraid to press the nuclear button”.

“The problem is these machines are not fully rounded, reasoning, functioning human beings with moral compasses, these are just giant models that have seen an awful lot of our data and when they see gaps they just make stuff up.”

The Iran conflict has made clear that AI targeting has become a fundamental part of modern warfare, yet its capabilities, both good and bad, remain largely untested, with the potential to cause catastrophe, as witnessed in Minab.

Updated: March 11, 2026, 5:10 PM