Israeli military AI system led to mass killing of civilians, alleges report

Army denies reports it used Lavender system to draw up list of thousands of targets for air strikes, with officers given permission to kill civilians

Gaza has been under intense Israeli bombardment for six months. AP


Israel has been accused of using an artificial intelligence system to identify targets for air strikes in Gaza, a process that allegedly permitted the mass killing of civilians.

A new investigation by the Israel-based +972 Magazine and Local Call alleges that the Israeli military used an AI-powered database known as Lavender to generate a list of 37,000 potential targets with apparent links to Hamas.

Six unnamed Israeli intelligence sources who spoke to +972 said Israeli military officials used the list of targets to authorise air strikes that led to unprecedented civilian casualties in Gaza, where more than 33,000 Palestinians have been killed since October 7.

According to the report, the targets identified by Lavender were often of questionable accuracy, but military officials approved the system's use after a sample of its output was found to be 90 per cent accurate.

Before the current war, Israeli intelligence operatives planning air strikes needed a lawyer to sign off on each strike, once an assessment had confirmed that the target was valid and that measures to minimise civilian harm had been considered.

This changed in the aftermath of October 7, when the Lavender system was given approval to draw up a target list of Hamas's "junior operatives", said the report.

According to one Israeli source, many of those assessed as targets posed no risk to Israeli soldiers and worked in civilian roles. Sources said the machine flagged individuals whose communication patterns resembled those of Hamas members, including civil defence workers, as well as residents whose names were similar to those of Hamas operatives.

Chillingly, the sources say that Israeli intelligence officers conducted only minimal checks on the machine's output, with one saying he spent only about 20 seconds examining each target, mainly to confirm that the target was a man.

He told +972 that he could work through “dozens of them every day. I had zero added value as a human, apart from being a stamp of approval. It saved a lot of time”.

The sources claimed that assessments of civilian harm could allow the deaths of 15 or 20 civilians in the process of killing even a junior Hamas operative. One source said that in the case of one senior Hamas commander, they were given permission to kill "over 100 civilians".

The military frequently chose to bomb suspected militants inside their homes, and used unguided "dumb bombs", which cause greater collateral damage, rather than precision-guided munitions.

These practices caused the "unprecedented" civilian death toll and the disproportionately high number of women and children killed, the report said.

'Hiding behind an algorithm'

The reported use of AI in targeting raises troubling questions about the ethics of modern warfare and about how Israel, a world leader in technological innovation, is using its expertise to kill Palestinians.

The report suggests that, if confirmed, Israel's use of AI systems would have breached the principle of proportionality, a key element of international humanitarian law that governs how an army must weigh the risk to civilians in any attack on military targets.

One of the sources quoted in the report said that "in practice, the principle of proportionality did not exist".

When it comes to accountability for crimes committed in war, one source who works in the AI industry told The National they were concerned that focusing on the programmes could deflect blame for atrocities that are ultimately committed by humans.

"Hiding behind an algorithm is a massive problem, because it is still very much humans deciding if they want to use these programmes for the simple goal of eradicating a group of people," they told The National.

"This is really a question of causality. Is it the case that using an automated system at this specific part of the kill chain – and the cover it affords those who implement the programme – protects civilian lives or, in actual fact, increases the number of people killed and increases the escalation potential in the war?"

The Israeli military has rejected the claims in the +972 Magazine report, issuing a statement saying it “does not use an artificial intelligence system that identifies terrorist operatives or tries to predict whether a person is a terrorist”.

It said it had used a “database whose purpose is to cross-reference intelligence sources, in order to produce up-to-date layers of information on the military operatives of terrorist organisations”, adding that “identified targets meet the relevant definitions in accordance with international law and additional restrictions stipulated in the IDF directives.

“IDF reviews targets before strikes and chooses the proper munition in accordance with operational and humanitarian considerations, taking into account an assessment of the relevant structural and geographical features of the target, the target’s environment, possible effects on nearby civilians, critical infrastructure in the vicinity, and more.”

Updated: April 05, 2024, 6:35 AM