Social Science Paradigm Challenge

Counterintuitively, the more rules we write for AI safety, the more incidents and failures we actually record.

SSRN · March 13, 2026 · 6281098

Jason Gagne

Why it matters

Analysis of over 1,300 AI safety incidents shows that as countries and companies adopt more "mature" governance frameworks, the number of recorded incidents actually rises. This suggests that current AI policies function as a reporting system for failures rather than as a preventive measure against them.

From the abstract

AI governance frameworks have become the dominant policy response to AI risk across government, defense, and commercial sectors. This paper argues that these frameworks suffer from a fundamental structural problem: they are modeled on human regulatory theory, which was never designed to operate without the cultural, social, and psychological infrastructure that co-evolved alongside human law. We term this the Behavioral Sufficiency Problem.

Using original empirical …