Challenges the monotonic 'bigger is better' scaling paradigm by proving that institutional fitness peaks at an environment-dependent scale.
arXiv · March 17, 2026 · 2603.14126
The Takeaway
The paper gives a mathematical proof of 'Capability-Trust Divergence': scaling a model beyond a certain point can decrease its institutional utility once trust, sovereignty, and cost factors are accounted for. This suggests a phase transition away from frontier generalists and toward orchestrated systems of smaller, domain-specialized models.
From the abstract
Classical scaling laws model AI performance as monotonically improving with model size. We challenge this assumption by deriving the Institutional Scaling Law, showing that institutional fitness -- jointly measuring capability, trust, affordability, and sovereignty -- is non-monotonic in model scale, with an environment-dependent optimum N*(epsilon). Our framework extends the Sustainability Index of Han et al. (2025) from hardware-level to ecosystem-level analysis, proving that capability and tr…
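The core claim can be sketched as a toy model. Everything below is an illustrative assumption, not the paper's actual definitions: we treat institutional fitness F(N, epsilon) as a saturating capability term minus an environment-weighted penalty that grows with scale, which is enough to produce an interior optimum N*(epsilon) that shrinks as the environment's trust/cost constraints (epsilon) tighten.

```python
import numpy as np

# Illustrative sketch only: the paper's functional forms are not given in
# this excerpt. Capability saturates with model size N (diminishing returns),
# while trust/cost/sovereignty burdens grow linearly with N, scaled by an
# environment parameter eps. The difference is non-monotonic in N.

def fitness(N, eps):
    capability = np.log1p(N)   # saturating returns to scale (assumed form)
    penalty = eps * N          # environment-weighted burden (assumed form)
    return capability - penalty

def optimal_scale(eps, grid=np.logspace(0, 6, 4000)):
    # N*(eps): the fitness-maximizing scale for environment eps,
    # found numerically over a log-spaced grid of candidate sizes.
    return grid[np.argmax(fitness(grid, eps))]

# Stricter environments (larger eps) favor smaller models.
print(optimal_scale(1e-3) > optimal_scale(1e-1))
```

Under these assumed forms the optimum is analytic (N* = 1/eps - 1), so the qualitative prediction, that the fitness-maximizing scale is finite and environment-dependent rather than 'bigger is always better', falls out directly.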