Economics · Paradigm Challenge

If you try to make multiple AI agents work together as a team, the whole system fails roughly twice as fast as a single model working alone.

March 25, 2026

Original Paper

ARC Perspective on Multi-Agent Systems: Authority Diffusion, Contamination, and Responsibility Collapse

Do-Hyoung Kim

SSRN · 6261378

The Takeaway

The industry is currently rushing toward 'Multi-Agent Systems' to solve complex problems, but this research finds that they suffer from 'Interpretation Contamination.' Instead of correcting each other, the agents create a feedback loop of internal logic that causes their intelligence to collapse and their safety guidelines to fail much faster than they would in a lone model.
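The feedback-loop intuition can be illustrated with a toy model (not from the paper): assume each hand-off between agents preserves only a fraction of the original instruction's intent, so a chain of agents compounds the loss multiplicatively. The `fidelity` value and the five-agent chain below are illustrative assumptions, not figures from the study.

```python
# Toy sketch of "interpretation contamination" as compounding hand-off error.
# Assumption (hypothetical, for illustration): each agent-to-agent hand-off
# retains only `fidelity` of the original intent, so a chain of n agents
# retains fidelity ** n.

def retained_intent(fidelity: float, agents: int) -> float:
    """Fraction of the original instruction's intent surviving the chain."""
    return fidelity ** agents

single = retained_intent(0.95, 1)  # one model, one interpretation step
chain = retained_intent(0.95, 5)   # five agents re-interpreting each other

print(f"single agent: {single:.3f}")   # 0.950
print(f"5-agent chain: {chain:.3f}")   # 0.774
```

Under this assumption, adding agents does not average errors away; it multiplies them, which matches the summary's claim that the ensemble degrades faster than a lone model.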

From the abstract

Recently, the artificial intelligence industry has rapidly shifted toward Multi-Agent Systems (MAS), which combine multiple agents to overcome the limitations of a single model. However, this paper argues that this trend is not a technological evolution but a form of structural deviation, one that results in the evasion of responsibility attribution and the neutralization of guidelines. This study adopts the ARC (Authority First) framework as its analytical reference point and dissects