Introduces StatePlane, a model-agnostic memory architecture that enables long-horizon AI reasoning without expanding the context window or KV cache.
arXiv · March 17, 2026 · 2603.13644
The Takeaway
StatePlane treats memory as an evolving "state plane" governed by cognitive principles (segmentation, selective encoding, and adaptive forgetting) rather than as static storage. This lets LLMs and SLMs maintain coherence over multi-session, long-running tasks that would otherwise exceed hardware context constraints.
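The paper does not publish an API, but the three principles can be sketched in a toy memory manager: segments mark task or session boundaries, a salience threshold performs selective encoding, and per-step decay implements adaptive forgetting. All names, thresholds, and heuristics below are hypothetical illustrations, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    text: str
    salience: float  # decision-relevance score (hypothetical heuristic)

@dataclass
class StatePlaneSketch:
    """Toy illustration only: segmented, salience-gated memory with decay."""
    encode_threshold: float = 0.5   # selective encoding: keep only salient items
    decay: float = 0.9              # adaptive forgetting: salience decays each step
    forget_threshold: float = 0.2   # items below this are dropped
    segments: list = field(default_factory=list)

    def new_segment(self):
        # Segmentation: open a new unit, e.g. at a task or session boundary.
        self.segments.append([])

    def observe(self, text: str, salience: float):
        # Selective encoding: only decision-relevant content enters memory.
        if salience >= self.encode_threshold:
            self.segments[-1].append(MemoryItem(text, salience))

    def step(self):
        # Adaptive forgetting: decay salience, then prune faded items.
        for seg in self.segments:
            for item in seg:
                item.salience *= self.decay
            seg[:] = [i for i in seg if i.salience >= self.forget_threshold]

    def state(self):
        # The current "plane": what survives encoding and forgetting, by segment.
        return [[i.text for i in seg] for seg in self.segments]
```

For example, a low-salience aside ("small talk", 0.1) is never encoded, while a salient decision ("rollback needed", 0.8) persists across many steps before decaying out; the surviving state, not the raw transcript, is what would be fed back to the model.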
From the abstract
Large language models (LLMs) and small language models (SLMs) operate under strict context window and key-value (KV) cache constraints, fundamentally limiting their ability to reason coherently over long interaction horizons. Existing approaches -- extended context windows, retrieval-augmented generation, summarization, or static documentation -- treat memory as static storage and fail to preserve decision-relevant state under long-running, multi-session tasks. We introduce StatePlane, a model-agnostic memory architecture …