AI in the Shadows: Israel’s Reported ChatGPT-Style Surveillance Tool Targets Palestinians
Advanced AI Systems Raise Ethical Alarms in Prolonged Occupation
A groundbreaking investigative report by The Cradle has exposed an artificial intelligence (AI) system, developed by Israeli intelligence agencies and modeled after OpenAI’s ChatGPT, built to surveil, analyze, and target Palestinians in the occupied territories. The tool, described by sources as a “quantum leap” in digital surveillance, underscores the accelerating militarization of AI and its ethical ramifications in conflict zones.

The Tool: Capabilities and Functionality
According to the report, the AI system harnesses natural language processing (NLP) to parse vast datasets—including social media posts, encrypted communications, public records, and even traditional media—in real time. Its primary focus is Arabic-language content, enabling it to flag keywords, phrases, and behavioral patterns linked to perceived threats against Israeli security.
Unlike ChatGPT, which generates human-like text, this tool is designed for predictive analysis: identifying individuals or groups likely to engage in resistance activities. The system reportedly accelerates intelligence operations by automating the targeting process, reducing reliance on human analysts and enabling rapid decision-making for arrests, raids, or drone strikes.
Sources indicate the AI integrates with Israel’s existing surveillance infrastructure, such as the NSO Group’s Pegasus spyware and facial recognition networks, creating a seamless web of digital monitoring.
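To give a sense of what keyword-and-pattern flagging of this kind involves, here is a minimal sketch, assuming a hypothetical weighted watchlist; the terms, weights, threshold, and function names are all invented for illustration and do not describe the actual system.

```python
# Purely illustrative sketch of weighted keyword flagging over a text stream.
# All terms, weights, and thresholds are invented; nothing here reflects
# the actual system described in the report.
import re
from dataclasses import dataclass

@dataclass
class Flag:
    post_id: str
    matched: list[str]
    score: float

# Hypothetical watchlist: term -> weight (invented values)
WATCHLIST = {"protest": 0.2, "checkpoint": 0.3, "resistance": 0.6}

def flag_post(post_id: str, text: str, threshold: float = 0.5) -> Flag | None:
    """Return a Flag if weighted keyword matches reach the threshold."""
    words = re.findall(r"\w+", text.lower())
    matched = [w for w in words if w in WATCHLIST]
    score = min(1.0, sum(WATCHLIST[w] for w in matched))
    return Flag(post_id, matched, score) if score >= threshold else None

# Note: a benign post criticizing policy can still cross the threshold,
# which illustrates the false-positive concern raised later in the article.
print(flag_post("p1", "Peaceful protest near the checkpoint today"))
```

Even this toy version shows the core risk: the system has no notion of intent, only of vocabulary, so the quality of the watchlist and training data determines who gets flagged.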
Operational Context: Occupation and Asymmetric Warfare
The tool’s development cannot be divorced from Israel’s 56-year occupation of Palestinian territories, where mass surveillance has long been a cornerstone of security strategy. Over 3 million Palestinians in the West Bank live under a matrix of checkpoints, biometric databases, and drone surveillance, while Gaza’s 2.3 million residents endure near-total digital and physical isolation.
The AI’s deployment aligns with Israel’s “precision warfare” doctrine, which emphasizes preemptive strikes and targeted assassinations. However, critics argue that “automating repression” risks dehumanizing Palestinians, exacerbating systemic biases, and increasing civilian harm.
Key Concerns:
False Positives: AI systems trained on historically skewed data may misinterpret benign activities (e.g., social media posts criticizing occupation policies) as threats.
Lethal Automation: The tool could expedite drone strike approvals without sufficient human oversight, echoing controversies around the US “kill list” algorithms.
Normalizing Mass Surveillance: Palestinians, including children, are already tracked via military databases like Blue Wolf. AI could entrench this surveillance ecosystem even further.
Ethical and Legal Quagmires
Human rights organizations have condemned the reported AI system as a violation of international law. Under the Fourth Geneva Convention, collective punishment and disproportionate surveillance of occupied populations are prohibited. Amnesty International has repeatedly accused Israel of apartheid policies, citing its use of technology to entrench control.
Francesca Albanese, UN Special Rapporteur on the occupied Palestinian territories, stated in 2023: “Digital tools of oppression are becoming as lethal as physical weapons. The international community cannot turn a blind eye to algorithmic apartheid.”
Meanwhile, Israel’s military advocates defend the technology as a “necessary evolution” to combat groups like Hamas and Palestinian Islamic Jihad. “We face an enemy embedded in civilian populations,” said a former IDF intelligence officer, speaking anonymously. “AI helps minimize collateral damage.”
Israel’s Sinister Use of AI in the Gaza Genocide
The "Lavender" and "Where’s Daddy?" AI systems are reportedly part of Israel’s military-intelligence infrastructure, specifically used in its operations in Gaza. These tools, revealed in investigative reporting by +972 Magazine and Local Call (March 2024), highlight the Israeli military’s growing reliance on artificial intelligence to identify and track targets—a practice critics argue has led to catastrophic civilian harm.
1. "Lavender": The AI Target Factory
Purpose: Designed to identify individuals suspected of being Palestinian militants or affiliated with groups like Hamas.
How It Works:
The system aggregates data from surveillance sources (drones, phone taps, facial recognition, social media, informants) to generate a database of "suspects."
Each person is assigned a "score" (1–100) based on AI-predicted likelihood of being a militant.
According to Israeli intelligence officers, during the 2021 Gaza war and 2023–25 genocide, Lavender flagged up to 37,000 Palestinians as potential targets, including low-level operatives.
Human oversight was minimal: soldiers reportedly spent only 20 seconds reviewing each target before authorizing strikes, even in residential areas (a toy sketch of this score-and-threshold pattern follows below).
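To make the reported mechanics concrete, here is a minimal, purely illustrative sketch of a score-and-threshold selection step. Only the 1–100 score range and a cutoff around 70 are taken from the reporting as rough parameters; every name, data point, and structure below is invented and does not reflect the actual system.

```python
# Purely illustrative: a score-and-threshold selection step.
# Only the 1-100 score range and a ~70 cutoff come from the reporting;
# all names and data below are invented.
from typing import NamedTuple

class Person(NamedTuple):
    name: str
    score: int  # hypothetical AI-predicted "likelihood", 1-100

def select_targets(people: list[Person], cutoff: int = 70) -> list[Person]:
    """Flag everyone at or above the cutoff; no human review is modeled,
    mirroring the minimal-oversight concern described above."""
    return [p for p in people if p.score >= cutoff]

population = [Person("A", 71), Person("B", 69), Person("C", 95)]
# A two-point difference separates "target" from "non-target" here,
# which is the lax-threshold concern raised in the next list.
print(select_targets(population))  # flags A and C, not B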
Controversies:
Civilian Casualties: The system allegedly classified individuals with tenuous militant ties (e.g., WhatsApp group members, relatives of suspects) as valid targets. Entire families were reportedly bombed based on AI recommendations.
Lax Thresholds: A "score" as low as 70 out of 100 was sometimes deemed sufficient for strikes.
Mass Assassinations: Sources claim Israel approved "hundreds of collateral deaths" per senior Hamas operative, leading to widespread civilian fatalities.
2. "Where’s Daddy?": Tracking Targets in Real Time
Purpose: A complementary AI tool designed to monitor the locations of Lavender-identified targets.
How It Works:
Tracks suspects via mobile phones, smart devices, and surveillance networks.
Alerts the military when targets enter pre-approved strike zones, such as their homes, which Israel often classifies as "military sites"; a geofence check of this kind is sketched after this list.
Enabled "just-in-time" assassinations, with strikes frequently launched moments after targets arrived at locations.
Controversies:
Family Annihilation: Strikes were often carried out at night when targets were home, resulting in mass civilian casualties, including women and children.
Lack of Verification: Reliance on flawed AI data (e.g., phone signals shared among family members) led to misidentification.
Military Justification vs. Reality
The IDF claims such tools minimize civilian harm by enabling "surgical strikes." However, data from Gaza tells a different story:
Over 34,000 Palestinians killed in the 2023–24 Gaza war, mostly civilians.
UN experts estimate 40–70% of casualties are women and children.
Reports of "double-tap strikes" (targeting first responders) and attacks on refugee camps.
Global Implications: The AI Arms Race
Israel’s reported system is part of a global trend where nations weaponize AI for national security. China employs similar tools to monitor Uyghurs, while the US uses predictive algorithms in drone warfare. However, Israel’s program is notable for its focus on a stateless population under prolonged occupation—a legal and moral gray zone.
Tech ethics experts warn that unregulated AI sets a dangerous precedent in conflict zones. “Once these systems are normalized in places like Palestine, they’ll inevitably spread to other regions,” said Rasha Abdul-Rahim, director of Amnesty Tech. “Democracies will adopt them for border control; dictatorships will exploit them to crush dissent.”
Silicon Valley’s Role
While the AI’s developers remain unnamed, The Cradle notes that Israeli firms like NSO Group, Candiru, and Cytrox have long collaborated with global tech giants and venture capital firms. Many emerged from Unit 8200, the IDF’s elite cybersecurity division, and maintain close ties to Israeli intelligence.
This raises questions about the complicity of Western tech ecosystems. Microsoft, Google, and Amazon Web Services (AWS) have faced scrutiny for providing cloud infrastructure to governments accused of human rights abuses.
Palestinian Response: Digital Resistance
Palestinian civil society groups are increasingly advocating for “digital sovereignty” to counter Israel’s tech dominance. Initiatives like the 7amleh Center promote cybersecurity training and secure communication tools. However, resource disparities remain stark.
“We’re fighting algorithms with slogans,” said Nadim Nashif, founder of the Palestinian digital rights group 7amleh. “The power imbalance isn’t just physical—it’s now encoded in ones and zeros.”
Tech For Palestine is a brilliant resource for pro-Palestine tech and other online tools.
Calls for Accountability
The report has renewed demands for binding international regulations on AI in military contexts. While the EU’s AI Act and US executive orders on AI ethics are steps forward, they lack enforcement mechanisms for conflict zones.
Mustafa Barghouti, a Palestinian legislator, urged the International Criminal Court (ICC) to investigate: “This isn’t just about Palestine—it’s about whether humanity will allow machines to decide who lives and who dies.”
Conclusion
Israel’s reported AI tool epitomizes a dystopian crossroads where technology meets occupation. While proponents frame it as a precision instrument, its deployment against a stateless people underscores the urgent need for global oversight. As AI reshapes modern warfare, the line between security and oppression grows perilously thin—and Palestine remains the testing ground.
This story is ongoing. Updates will follow as additional details emerge.
AI is the latest oxymoron: the perfect scapegoat for committing atrocities and torturing people without ever admitting to the obvious crimes.