AI & ML Paradigm Challenge

The famous paperclip maximiser doomsday scenario might be mathematically impossible, because a superintelligent agent would rationally prioritise cooperation with rivals in order to gain information.

April 26, 2026

Original Paper

The AI Paperclip Maximiser as Constitutional Political Economy

Chris Berg, Darcy W E Allen

SSRN · 6644299

The Takeaway

Rationality leads an advanced AI to protect its rivals rather than destroy them, because those rivals hold unique epistemic data. The doomsday narrative assumes a single agent will destroy everything to achieve its goal. This framework suggests instead that a superintelligent system would develop a drive toward multipolarity to avoid the information costs of eliminating other agents. Cooperation becomes a logical necessity for a system that wants to maximise its understanding of the world. This shifts the focus of AI safety from preventing a lone godlike agent to managing a community of competing ones.

From the abstract

The AI 'paperclip maximiser' parable describes a superintelligent agent whose faithful pursuit of a single objective becomes catastrophic. Standard economic analysis frames this as a principal-agent or incomplete contracting problem. We argue that framing focuses on the wrong unit of analysis. The AI paperclip maximiser is better understood as a problem of multiple agents coordinating, under uncertainty, under sets of rules: it is a problem of constitutional political economy. The relevant knowle