AI & ML Scaling Insight

Discovers a multiplicative scaling law governing how LLMs revise their beliefs during iterative reasoning (CoT, reflection).

March 23, 2026

Original Paper

The α-Law of Observable Belief Revision in Large Language Model Inference

Mike Farmer, Abhinav Kochar, Yugyung Lee

arXiv · 2603.19262

The Takeaway

The paper identifies a 'belief revision exponent', α, that determines the stability of long-run model reasoning. This gives practitioners a mathematical framework for predicting and controlling whether multi-step reasoning agents will converge on a stable answer or diverge into instability.

From the abstract

Large language models (LLMs) that iteratively revise their outputs through mechanisms such as chain-of-thought reasoning, self-reflection, or multi-agent debate lack principled guarantees regarding the stability of their probability updates. We identify a consistent multiplicative scaling law that governs how instruction-tuned LLMs revise probability assignments over candidate answers, expressed as a belief revision exponent that controls how prior beliefs and verification evidence are combined.
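A multiplicative update with an exponent on the prior can be sketched as follows. Note this is an illustrative toy, not the paper's actual formulation: the functional form (posterior ∝ prior^α × evidence, renormalized), the symbol α, and the specific numbers are all assumptions based only on the abstract's description of a multiplicative law combining prior beliefs with verification evidence.

```python
import numpy as np

def revise_beliefs(prior: np.ndarray, evidence: np.ndarray, alpha: float) -> np.ndarray:
    """One hypothetical multiplicative belief-revision step:
    posterior ∝ prior**alpha * evidence, renormalized over candidates."""
    unnorm = prior ** alpha * evidence
    return unnorm / unnorm.sum()

# Three candidate answers; verification evidence favors candidate 0,
# while the model's prior favors candidate 1.
prior = np.array([0.2, 0.5, 0.3])
evidence = np.array([0.7, 0.2, 0.1])

# Iterate the revision step, as a reasoning loop might.
beliefs = prior
for _ in range(20):
    beliefs = revise_beliefs(beliefs, evidence, alpha=0.8)
```

In this toy, the exponent controls stability in the way the takeaway suggests: in log space each step contracts the belief vector by a factor of α, so for α < 1 the iteration converges to a unique fixed point determined by the evidence (here, candidate 0), regardless of the prior, while for α > 1 the dynamics amplify whichever candidate starts ahead and the distribution collapses in a prior-dependent, unstable way.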