AI Is Automating the Genocide in Gaza

Israel bombs the Gaza Strip on October 9, 2024. ATIA MOHAMMED / FLASH90

AI isn’t going to save us — that should be clear by now. We are drowning in a tar pit of hallucinatory gibberish and the world’s ugliest images. OpenAI lost $5 billion this year operating ChatGPT; most workers who have incorporated AI into their jobs report decreased efficiency. And yet the industry trudges on, powered by skyrocketing fossil fuel emissions and the mass exploitation of workers in the Global South.

But AI companies need to turn a profit somehow, and increasingly, that has meant turning to defense projects. OpenAI moved toward a for-profit structure this year in its search for military contracts and has begun adding former soldiers to its board of directors. For other companies, the path to success looks like collaboration with the Israeli regime.

Israel has long used facial recognition technology to surveil Palestinians in Gaza and the West Bank. In 2021, Google and Amazon signed onto Project Nimbus, a $1.2 billion cloud contract with the Israeli government. In 2022, The Intercept revealed that Project Nimbus involved AI and machine learning, including facial detection and “sentiment analysis,” a form of predicting emotions from faces that amounts to little more than race science. In fact, the head of Israel’s National Cyber Directorate, Gaby Portnoy, has publicly credited the contract with giving Israel “phenomenal” military infrastructure for their ongoing genocide in Gaza.

Google, for one, has pitched its ChatGPT competitor, Gemini, to agencies like the Israeli police as part of a collaboration that Portnoy has suggested would strengthen Israeli security and military apparatuses. Knowing what we do about large language models like Gemini and their tendency to produce false data, the consequences of using them as a military tool seem increasingly dire.

We don’t even have to imagine what these consequences look like. Beyond Project Nimbus, the Israeli Occupation Forces’ use of AI systems has already subjected Gaza to the devastating automation of death. Through AI systems known as “Habsora,” or “The Gospel,” and “Lavender,” the IOF automatically marks buildings and individuals with alleged Hamas ties as targets. The military operative overseeing the ensuing attacks then needs only to verify that the human target is male before authorizing the bombing.

Indeed, Palestinians have been reduced to data points whose decimation requires little human thought beyond the click of a button. Last year, a source told +972 Magazine and Local Call that each bombing authorization takes only about 20 seconds.

The use of AI enables IOF operatives to constantly search for more targets and more victims with shocking ease. Immediately after October 7th, civilian casualties in Gaza soared to an apocalyptic magnitude, made possible in part by the AI “mass assassination factory.” Using Hamas as an excuse, the IOF detonates private residences and public buildings alike, murdering hundreds of civilians with each military operation. As one member of Israel’s intelligence community told +972: “Nothing happens by accident. When a three-year-old girl is killed in a home in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed [...] We know exactly how much collateral damage there is in every home.”

If anything, the elimination of entire Palestinian families is desired by Israel: Its “Where’s Daddy?” system waits to bomb people until they reenter their family homes. As another intelligence officer stated: “It’s much easier to bomb a family’s home. The system is built to look for them in these situations.”

Israel’s method of dropping bombs also runs on AI. Elbit Systems, the largest Israeli arms manufacturer, has created autonomous drones called Lanius to further surveil and control the skies of Gaza. In essence, a Palestinian person can be identified, targeted, and killed entirely through AI, without the intervention of a single human being. This is the automation of mass murder. That does not mean Israeli political and military officials are unaware; on the contrary, those behind the targeting know precisely the extent of the devastation their AI-fueled killing system will bring. And AI makes it ever easier for them to publicly distance themselves from that devastation.

Naturally, the AI systems in question do not even serve the purpose Israeli officials might claim they serve. Lavender operates by assessing individual Palestinians’ likelihood of having ties to Hamas. Yet while Israel’s stated objective is the eradication of Hamas, Lavender was trained on data that included not only Hamas members, but also civil defense workers and non-militant employees of the Internal Security Ministry. Some of the traits determined to be associated with Hamas involvement — such as the frequent changing of cell phones — are also common civilian practices.

These truths, combined with AI’s existing predilection for falsification and the IOF’s demonstrated willingness to bomb civilians, mean that even by the IOF’s own internal checks, at least 10 percent — and likely far more — of AI-identified assassination targets have no ties to Hamas’s militant wing.

But as the saying goes, the purpose of a system is what it does. And as the AI boom devolves into a genocidal bust, one of AI’s greatest impacts has become abundantly clear: the automation and acceleration of Palestinian annihilation.