SeriesFusion
Science, curated & edited by AI

Counterintuitively, the more rules we write for AI safety, the more "incidents" and glitches we actually see.

Analysis of over 1,300 AI safety incidents shows that as countries or companies adopt more "mature" governance frameworks, the number of recorded accidents actually rises. This suggests that current AI policies act more as a reporting system for failures than as a mechanism for preventing them.

Original Paper

The Behavioral Sufficiency Problem

Jason Gagne

SSRN  ·  6281098

AI governance frameworks have become the dominant policy response to AI risk across government, defense, and commercial sectors. This paper argues that these frameworks suffer from a fundamental structural problem: they are modeled on human regulatory theory, which was never designed to operate without the cultural, social, and psychological infrastructure that co-evolved alongside human law. We term this the Behavioral Sufficiency Problem.

Using original emp