Social Science Paradigm Challenge

The idea that military AI is "precise" is, at its core, a legal fiction used to circumvent international law.

SSRN · March 13, 2026 · 6392939

Henning Lahmann

Why it matters

While most policy debates focus on making military AI 'less biased,' this paper argues that these systems are inherently incapable of reliably classifying individual humans as targets. It contends that the discourse of AI 'accuracy' serves primarily as a strategy to make legally questionable deployments appear cautious and technologically necessary.

From the abstract

The article critiques the 'functionality assumption' regarding military AI and exposes the epistemic and discursive powers at states' disposal to rationalise the use of such models for targeting purposes. The development of so-called artificial intelligence-enabled decision support systems (AI-DSS) for military operations has recently come into focus in the context of Israel's onslaught on Gaza. However, states' and companies' claims concerning their allegedly incredible capabilities have largely …