AI & ML Practical Magic

Stop wasting tokens on repeated RAG lookups; building an internal knowledge wiki for your agents cuts costs by 84.6%.

April 15, 2026

Original Paper

Knowledge Compounding: An Empirical Economic Analysis of Self-Evolving Knowledge Wikis under the Agentic ROI Framework

arXiv · 2604.11243

The Takeaway

Standard RAG systems treat every query as a fresh search, re-reading the same context again and again and burning through token budgets. This paper introduces a 'Knowledge Compounding' layer in which agents build and refine a persistent, structured wiki of their own insights. The economic results are striking: an 84.6% reduction in token consumption compared to standard retrieval-augmented generation. It shifts the agent paradigm from 'searching for answers' to 'accumulating equity in knowledge.' For developers, this is a blueprint for building agents that actually get smarter and cheaper the more you use them.
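To make the idea concrete, here is a minimal Python sketch of a wiki-first retrieval layer. All names here (`KnowledgeWiki`, `rag_retrieve`, the first-sentence distillation) are illustrative assumptions, not the paper's actual Qing Claw implementation: a query first checks the persistent wiki for a distilled insight, and only falls back to a full (simulated) RAG pass on a miss, writing a compact entry back for next time.

```python
# Hypothetical sketch of a "knowledge compounding" layer in front of RAG.
# Names and the distillation heuristic are illustrative, not from the paper.

def count_tokens(text: str) -> int:
    """Crude token estimate: whitespace-separated words."""
    return len(text.split())


def rag_retrieve(topic: str, corpus: dict[str, str]) -> str:
    """Stand-in for a full RAG pass: returns the raw context for a topic."""
    return corpus[topic]


class KnowledgeWiki:
    """Persistent store of distilled insights, consulted before RAG."""

    def __init__(self) -> None:
        self.entries: dict[str, str] = {}
        self.tokens_spent = 0

    def answer(self, topic: str, corpus: dict[str, str]) -> str:
        if topic in self.entries:
            # Wiki hit: pay only for the compact distilled entry.
            insight = self.entries[topic]
        else:
            # Wiki miss: pay for the full retrieved context, then distill
            # and persist it (naively here: keep the first sentence).
            context = rag_retrieve(topic, corpus)
            insight = context.split(".")[0] + "."
            self.entries[topic] = insight
            self.tokens_spent += count_tokens(context)
        self.tokens_spent += count_tokens(insight)
        return insight
```

Repeated queries on the same topic then hit the wiki and cost a few tokens instead of re-paying for the full retrieved context, which is the mechanism behind the cost reduction the paper measures.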

From the abstract

Building on the Agentic ROI framework proposed by Liu et al. (2026), this paper introduces knowledge compounding as a new measurable concept in the empirical economics of LLM agents and validates it through a controlled four-query experiment on Qing Claw, an industrial-grade C# reimplementation of the OpenClaw multi-agent framework. Our central theoretical claim is that the cost term in the original Agentic ROI equation contains an unexamined assumption -- that the cost of each task is mutually