AI & ML Efficiency Breakthrough

Achieves 99.5% performance on Needle-In-A-Haystack benchmarks while retaining only 3% of the KV cache budget.

arXiv · March 13, 2026 · 2603.11564

Zhenxu Tian, Yi Su, Juntao Li, Min Zhang

Why it matters

The paper identifies that positional information is more critical than semantic content for KV cache eviction. By using 'position-aware pseudo queries' to simulate queries from future decoding steps, it achieves a 33x compression ratio, directly addressing the primary memory bottleneck of long-context LLM inference.
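To make the idea concrete, here is a minimal sketch of position-aware pseudo-query eviction. The construction of the pseudo queries (sampling future positional embeddings) and all names here are assumptions for illustration; the paper's actual method may differ.

```python
import numpy as np

def evict_kv_cache(keys, values, pos_emb, keep_ratio=0.03, n_pseudo=4, seed=0):
    """Hedged sketch: score cached tokens with pseudo queries built from
    positional embeddings of positions the model is expected to decode
    next, then keep only the top `keep_ratio` fraction.

    keys, values: (T, d) cached key/value projections for one head.
    pos_emb:      (T_future, d) positional embeddings for future positions
                  (an illustrative stand-in for the paper's construction).
    """
    rng = np.random.default_rng(seed)
    # Sample a few future positions and treat their embeddings as
    # "pseudo queries" that simulate not-yet-issued decoding queries.
    idx = rng.choice(len(pos_emb), size=n_pseudo, replace=False)
    pseudo_q = pos_emb[idx]                               # (n_pseudo, d)
    # Scaled dot-product scores of pseudo queries against cached keys.
    scores = pseudo_q @ keys.T / np.sqrt(keys.shape[1])   # (n_pseudo, T)
    importance = scores.max(axis=0)                       # best score per token
    k = max(1, int(keep_ratio * len(keys)))
    keep = np.sort(np.argsort(importance)[-k:])           # top-k, original order
    return keys[keep], values[keep], keep
```

With `keep_ratio=0.03`, a 1000-token cache shrinks to 30 entries, matching the roughly 33x compression the summary cites.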

From the abstract

The Key-Value (KV) cache is crucial for efficient Large Language Model (LLM) inference, but excessively long contexts drastically increase its memory footprint. Existing KV cache compression methods typically rely on input-side attention patterns within a prompt observation window to estimate token importance during the prefill stage. However, they fail to preserve tokens that are critical for future generation, since these importance assessments are not derived from the decoding process. Intuitively, an effective obs…
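The input-side baseline the abstract describes can be sketched as follows: score each prefix token by the attention it receives from the last few prompt positions (the "observation window") during prefill. The function name and window size are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def window_importance(attn, window=32):
    """Hedged sketch of observation-window scoring.

    attn: (T, T) causal attention matrix from the prefill pass.
    Returns one importance score per prefix token (positions before
    the window), accumulated over the window's queries.
    """
    obs = attn[-window:, :-window]   # window queries attending to the prefix
    return obs.sum(axis=0)           # total attention received per prefix token
```

Because these scores come only from prompt-side queries, tokens that matter to later decoding steps can still be evicted, which is the failure mode the paper targets.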