AI from Anthropic, OpenAI, Google: Allegations Point to US Building 'Kill Lists' for Gaza and Beyond
The core allegation is that the US government deploys commercial AI models from Anthropic, OpenAI, Google, and xAI for Signals Intelligence, allegedly to autonomously generate 'kill lists' for bombing operations.
One user, Michael Altfield, connected this directly to historical atrocities, drawing a parallel between Nazi industrial conglomerates (IG Farben/Bayer/BASF) and modern AI data centers, which he labeled a 'machinery of war.' Altfield also cited a 2024 report alleging Israeli use of AI in Gaza bombing decisions and speculated that the US has similar capability to target civilian sites, referencing the 2026 Minab bombing.
The weight of the discussion rests on these specific, alarming claims linking corporate AI development to real-world military targeting decisions. The fault line runs between treating these severe allegations as worth reporting and dismissing them as unfounded conjecture.
Key Points
#1 US allegedly uses major AI firms for targeting.
Michael Altfield claims the US government uses Anthropic, OpenAI, Google, and xAI for Signals Intelligence to create bombing 'kill lists'.
#2 Direct comparison to historical war crimes.
Michael Altfield draws explicit parallels between modern AI data centers and Nazi industrial structures like IG Farben/Bayer/BASF.
#3 Allegation of AI in Gaza targeting.
The analysis references a 2024 article from +972 detailing alleged Israeli military use of AI in Gaza bombing decisions.
#4 Speculation on US targeting capability.
Altfield alleges the US built a similar AI system for targeting schools and hospitals, citing the 2026 bombing of Minab, Iran.
Source Discussions (7)
This report was synthesized from the following Lemmy discussions, ranked by community score.