AI & ML Paradigm Shift

A statistical physics framework that predicts the fundamental limits of agentic self-improvement and nested LLM architectures.

March 26, 2026

Original Paper

A Theory of LLM Information Susceptibility

Zhuo-Yang Song, Hua Xing Zhu

arXiv · 2603.23626

The Takeaway

The paper introduces a theory of 'information susceptibility' to explain when LLM intervention actually improves system performance and when it hits a ceiling. The theory provides a principled way to design agentic systems, and it suggests that nested, co-scaling architectures are a necessary condition for open-ended improvement.
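One way to make the 'ceiling' concrete: treat susceptibility as the slope of performance against log compute budget, and compare that slope with and without a fixed LLM in the loop. Below is a minimal illustrative sketch of that comparison; the evaluators evaluate_strategy and evaluate_with_llm are hypothetical toy stand-ins, not the paper's code or experiments.

```python
import numpy as np

def susceptibility(perf_fn, budgets):
    """Estimate performance susceptibility as the fitted slope of
    performance vs. log(budget) -- a finite-sample stand-in for dU/dlogN."""
    log_n = np.log(np.asarray(budgets, dtype=float))
    perf = np.array([perf_fn(b) for b in budgets])
    slope, _ = np.polyfit(log_n, perf, 1)
    return slope

# Hypothetical evaluators: performance of a raw strategy set vs. the same
# set post-processed by a *fixed* LLM. Both are toy placeholders.
def evaluate_strategy(budget):
    return 1.0 - 1.0 / np.log(budget + np.e)           # diminishing returns

def evaluate_with_llm(budget):
    return min(0.9, 1.05 * evaluate_strategy(budget))  # boosted but capped

budgets = [10, 100, 1_000, 10_000, 100_000]
chi_base = susceptibility(evaluate_strategy, budgets)
chi_llm = susceptibility(evaluate_with_llm, budgets)
# The hypothesis predicts chi_llm <= chi_base once budgets are large:
print(f"chi(base) = {chi_base:.4f}, chi(LLM-intervened) = {chi_llm:.4f}")
```

In this toy setup the fixed LLM raises absolute performance at every budget, yet its capped output flattens the performance-vs-log-budget curve, so the measured susceptibility drops, which is the qualitative shape of the ceiling the paper describes.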

From the abstract

Large language models (LLMs) are increasingly deployed as optimization modules in agentic systems, yet the fundamental limits of such LLM-mediated improvement remain poorly understood. Here we propose a theory of LLM information susceptibility, centred on the hypothesis that when computational resources are sufficiently large, the intervention of a fixed LLM does not increase the performance susceptibility of a strategy set with respect to budget. We develop a multi-variable utility-function framework…
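The abstract states the central hypothesis in prose. A minimal formalization, in our own notation (the symbols U, N, S, f, and χ are assumptions for illustration, not necessarily the paper's), might read as follows:

```latex
% Performance susceptibility of a strategy set S: the marginal gain in
% utility U per unit of log compute budget N (notation ours, not the paper's).
\[
  \chi(S, N) \;=\; \frac{\partial\, U(S, N)}{\partial \log N}
\]
% The hypothesis, paraphrased: intervention by a *fixed* LLM f does not
% increase susceptibility once the budget is sufficiently large:
\[
  \chi\bigl(f(S), N\bigr) \;\le\; \chi(S, N)
  \qquad \text{for } N \gg N_0 .
\]
```

Read this way, the takeaway's argument for nested, co-scaling architectures is that only an LLM whose own capability grows with N can escape the bound attached to a fixed f.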