Israel’s reported use of AI in its Gaza war may explain thousands of civilian deaths

Palestinians living in al-Maghazi refugee camp collect usable items from the rubble of buildings following an Israeli attack in Deir al-Balah, Gaza, on April 4, 2024.
Ashraf Amra—Anadolu/Getty Images

For years, experts have warned about the dangers of using AI in warfare. Much of that coverage has focused on the nightmare scenario of Terminator-style autonomous weapons, but that isn't the only dystopian possibility for the technology's battlefield deployment, as Israel has reportedly demonstrated in its war against Hamas in Gaza.

The Israeli sister publications +972 and Local Call yesterday published a lengthy account of an AI-based system called Lavender, which the Israel Defense Forces have reportedly been using to identify targets for assassination (+972 also shared the accounts with the Guardian). Historically, assassinations were only authorized when the target had been thoroughly assessed and confirmed to be a senior Hamas operative. But this time, following the horror of the Oct. 7 attacks, it seems the IDF engaged in a much more indiscriminate program of killing, the article said.

According to the publications' multiple sources within Israeli intelligence, Lavender was trained on many kinds of data—from photos and cellular information to communication patterns and social media connections—to recognize the characteristics of known Hamas and Palestinian Islamic Jihad operatives. The training data also reportedly included some Palestinian civil defense workers. Lavender was then used to assign a score from 1 to 100 to almost everyone in Gaza, based on how many of those characteristics they matched. People with high scores became potential assassination targets, with Lavender's list reportedly including as many as 37,000 names at one point.

Despite the fact that Israeli intelligence knew the system was only 90% accurate in identifying militants (meaning roughly one in 10 people it flagged was not one), little in the way of human review followed, the sources said. According to the intelligence officers, the list was so long, and most people on it of such low importance, that they spent only seconds verifying each target, mostly by confirming that it was a man. None of these sources is named in the article, and Fortune cannot verify their accounts.

In a lengthy statement issued to the Guardian after the story’s publication, the IDF said it “directs its strikes only towards military targets and military operatives and carries out strikes in accordance with the rules of proportionality and precautions in attacks. Exceptional incidents undergo thorough examinations and investigations.” It denied using any AI system “that identifies terrorist operatives or tries to predict whether a person is a terrorist” and described Lavender as a “database whose purpose is to cross-reference intelligence sources.” The IDF added that it “does not carry out strikes when the expected collateral damage from the strike is excessive in relation to the military advantage.”

However, the way in which Lavender’s output was reportedly used may explain some of the extraordinary civilian death toll, particularly early in the war. Nearly 15,000 Palestinians died in the first six weeks, most of them women and children.

Not only were thousands of junior Hamas operatives targeted, but the IDF reportedly took the unprecedented step of deciding that as many as 15 or 20 people would be acceptable collateral damage in each strike, the article said. The IDF also tended to bomb targets at home, using other automated systems to determine when a target had arrived there (surveillance in Gaza is pervasive). According to +972's sources, these systems often inaccurately estimated how many other people were in the building. And because the targets were of such low value, unguided "dumb" bombs were routinely used, further increasing the civilian death toll, the sources said.

“Once you go automatic, target generation goes crazy,” one source said.

The lessons here apply to any war, present or future. If you have technology like that described in the +972 article, everything depends on how you use it. If you are trying to minimize civilian deaths, you will be extremely cautious; if you find many civilian deaths acceptable, you will get them. Letting machines make life-or-death decisions is itself a human decision, as is the level of oversight applied.

Last year, the U.S. organized a Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, which over 50 countries have now signed—Israel is not one of them, nor are Russia, China, and other notable names. The agreement states that military use of AI must comply with international law, particularly humanitarian law, and be “ethical, responsible, and enhance international security.” It also says that “a principled approach to the military use of AI should include careful consideration of risks and benefits, and it should also minimize unintended bias and accidents.”

But even if Israel had signed it, that AI agreement is nonbinding, as was the United Nations General Assembly resolution last December that expressed concerns about the development of lethal autonomous weapons (Israel abstained on that vote). Perhaps what is emerging about AI’s role in Gaza will encourage the world to negotiate an actual treaty on such things.

More news below. And by the way, I’ve made corrections to the online version of yesterday’s essay, in which I erroneously said Intel wouldn’t be using extreme ultraviolet lithography in its upcoming 18A chipmaking process—sorry about that.

David Meyer

Want to send thoughts or suggestions to Data Sheet? Drop a line here.

NEWSWORTHY

Google AI search subscriptions. Google may make people pay to use some of its new AI search features, in what would be a fundamental shift for the core-services-are-free giant. According to the Financial Times, Google's management has not yet decided "whether or when" to launch the paid service. Either way, traditional Google Search would remain free to use.

BlackBerry harassment suit. BlackBerry (remember them?) is being sued by a former employee, as is CEO John Giamatteo. The complainant says Giamatteo sexually harassed her when he was head of BlackBerry’s cybersecurity business, then bad-mouthed her within the company after she spurned his advances, the Canadian Press reports. She says the firm’s leaders knew about her complaints when they named Giamatteo as CEO in December. BlackBerry and Giamatteo deny the allegations.

Outage Day. Meta's WhatsApp, Instagram, and Facebook all went down briefly yesterday, for as-yet-unexplained reasons, the BBC reports. Also yesterday, several Apple services, including the App Store, Apple TV+, and Apple Music, went down in multiple regions, with disruption also reported to other Apple services. Again, no explanation was given, Reuters reports.

SIGNIFICANT FIGURES

208 million pounds

—The amount of plastic waste generated by Amazon's packaging in 2022, in the U.S. alone. That's according to the environmental nonprofit Oceana, which says it's enough plastic, in the form of air pillows, to encircle the Earth more than 200 times. However, while Amazon's U.S. plastic waste continues to grow, the company's plastic use is falling in other markets.

IN CASE YOU MISSED IT

The cost of training AI could soon become too much to bear, by David Meyer

TSMC shrugs off Taiwan’s biggest earthquake in 25 years, showing its massive chip foundry mega-complexes are nearly quake-proof, by Sasha Rogelberg

Foundation laid for ‘Silicon Heartland’ in Indiana with $3.9 billion semiconductor research and development campus, South Korean CEO says, by the Associated Press

Analysts explain the rise of China’s EV sector—and how it might navigate protectionist backlash and Beijing withdrawing support, by Lionel Lim

Elon Musk’s X names new head of safety after position was open for 9 months, by the Associated Press

Cisco CEO Chuck Robbins shares the ‘throwaway line’ that helped get him the job, by Alan Murray

BEFORE YOU GO

That guy who saved the internet. The German-born Microsoft engineer Andres Freund is being feted by all of geekdom after he found a very dangerous backdoor in a new version of XZ Utils, a tool that's included with many Linux distributions. His discovery led to a quick fix, potentially staving off a major global cyberattack. But who is "Jia Tan," the person or people responsible for inserting the vulnerability? Whoever they are, they played a long game, contributing code to XZ Utils and other projects before largely taking over control of the tool last year. Experts suspect a state-sponsored operation, Wired reports.

This is the web version of Data Sheet, a daily newsletter on the business of tech. Sign up to get it delivered free to your inbox.