Introduces a discrete-ratio selector for context compression that addresses the variable information density of long-form text.
March 30, 2026
Original Paper
Density-aware Soft Context Compression with Semi-Dynamic Compression Ratio
arXiv · 2603.25926
The Takeaway
Existing soft-prompt compression methods use static ratios, wasting tokens on low-density 'fluff' and losing detail in dense passages. This framework lets LLMs adaptively allocate their compression budget according to intrinsic information density, reaching a markedly better performance-to-compute Pareto frontier on long-context tasks.
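To make the idea concrete, here is a minimal sketch of density-aware discrete-ratio selection. This is not the paper's implementation: the lexical-density proxy, the thresholds, and the candidate ratio set are all hypothetical stand-ins for whatever learned density signal the framework actually uses.

```python
# Illustrative sketch (not the paper's method): pick a compression ratio
# for each context chunk from a small discrete set, using a crude
# unique-token proxy for information density. All values are hypothetical.

RATIOS = [2, 4, 8, 16]  # candidate ratios (input tokens per latent token)

def density(chunk: list[str]) -> float:
    """Crude proxy for information density: fraction of unique tokens."""
    return len(set(chunk)) / max(len(chunk), 1)

def select_ratio(chunk: list[str]) -> int:
    """Denser chunks get gentler compression (a smaller ratio)."""
    d = density(chunk)
    if d > 0.8:
        return RATIOS[0]
    if d > 0.6:
        return RATIOS[1]
    if d > 0.4:
        return RATIOS[2]
    return RATIOS[3]

def latent_budget(chunks: list[list[str]]) -> int:
    """Total latent tokens after per-chunk adaptive compression."""
    return sum(-(-len(c) // select_ratio(c)) for c in chunks)  # ceil division

dense = "gradient clipping threshold set to 1.0 with cosine decay".split()
fluff = "it was nice it was nice it was nice it was nice".split()
print(select_ratio(dense), select_ratio(fluff))  # dense → 2, fluff → 16
```

A dense, information-rich chunk keeps a small ratio (many latents preserved), while a repetitive chunk is compressed aggressively, which is the budget-allocation behavior the takeaway describes.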
From the abstract
Soft context compression reduces the computational workload of processing long contexts in LLMs by encoding the context into a smaller number of latent tokens. However, existing frameworks apply uniform compression ratios, failing to account for the extreme variance in natural language information density. While adopting a density-aware dynamic compression ratio seems intuitive, empirical investigations reveal that models struggle intrinsically with operations parameterized by input-dependent, …