Economics · Paradigm Challenge

You don't need to see an AI's secret code to know if it's safe—you just need to read its chat history.

March 20, 2026

Original Paper

The Light at the Door: MAP and the Interaction-Visible Governance of the Black Box

SSRN · 6316458

The Takeaway

While policymakers argue that 'open weights' or access to proprietary code is needed to ensure AI safety, this paper demonstrates that the most consequential harms can be detected purely from turn-by-turn dialogue records. This challenges the global push for model transparency by showing that, for governance, the 'screen' matters more than the 'hood.'

From the abstract

The dominant assumption in AI governance is that meaningful auditing requires access to model internals. This paper argues that assumption is wrong in a way that has significant consequences for every governance framework currently in operation. The most consequential AI harms are not located inside the model. They are located in the interaction record: the visible, turn-by-turn exchange between system and user. The Meaning Audit Protocol (MAP) operationalises this claim. Applied turn by turn to