
DeltaKV: Residual-Based KV Cache Compression via Long-Range Similarity

About

The deployment of efficient long-context LLMs in applications like autonomous agents, long-chain reasoning, and creative writing is fundamentally bottlenecked by the linear growth of KV cache memory. Existing compression and eviction methods often struggle to balance accuracy, compression ratio, and hardware efficiency. We propose DeltaKV, a residual-based KV cache compression framework motivated by two empirical findings: long-range inter-token similarity and highly shared latent components in KV representations. Instead of discarding tokens, DeltaKV encodes semantic residuals relative to retrieved historical references, preserving fidelity while substantially reducing storage. To translate compression gains into real system speedups, we further introduce Sparse-vLLM, a high-performance inference engine with decoupled memory management and kernels optimized for sparse and irregular KV layouts. Experiments show that DeltaKV reduces KV cache memory to 29% of the original while maintaining near-lossless accuracy on LongBench, SCBench, and AIME. When integrated with Sparse-vLLM, it achieves up to 2× throughput improvement over vLLM in long-context scenarios, demonstrating a practical path toward scalable long-context LLM deployment. Code, model checkpoints, and datasets are available at https://github.com/CURRENTF/Sparse-vLLM.
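The core idea of encoding each KV vector as a residual against a similar historical reference can be sketched as follows. This is a minimal illustration of the general technique, not DeltaKV's actual implementation: the function names, the cosine-similarity retrieval, and the NumPy interface are all assumptions for clarity.

```python
import numpy as np

def compress_kv(kv: np.ndarray, refs: np.ndarray):
    """Encode each KV vector as a residual against its nearest reference.

    kv:   (n, d) new KV vectors to compress
    refs: (m, d) historical reference vectors retrieved from earlier context
    Returns (ref_ids, residuals). Hypothetical interface, not DeltaKV's API.
    """
    # Cosine similarity between each new KV vector and each reference
    kv_n = kv / np.linalg.norm(kv, axis=1, keepdims=True)
    ref_n = refs / np.linalg.norm(refs, axis=1, keepdims=True)
    sim = kv_n @ ref_n.T                 # (n, m) similarity matrix
    ref_ids = sim.argmax(axis=1)         # nearest reference per token
    residuals = kv - refs[ref_ids]       # small deltas; cheap to store/quantize
    return ref_ids, residuals

def decompress_kv(ref_ids, residuals, refs):
    # Reconstruction: reference vector plus stored residual
    return refs[ref_ids] + residuals

# Toy check: KV vectors that are near-copies of references compress losslessly
rng = np.random.default_rng(0)
refs = rng.normal(size=(8, 16))
kv = refs[rng.integers(0, 8, size=32)] + 0.01 * rng.normal(size=(32, 16))
ids, res = compress_kv(kv, refs)
assert np.allclose(decompress_kv(ids, res, refs), kv)
```

In this sketch the residuals carry all the information, so reconstruction is exact; the memory saving in a real system comes from the residuals being small and therefore aggressively quantizable, which the long-range similarity finding is meant to guarantee.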

Jitai Hao, Qiang Huang, Yaowei Wang, Min Zhang, Jun Yu • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Mathematical Reasoning | AIME | AIME Accuracy | 43.3 | 283
Long-context Understanding | LongBench | Overall Average Score | 50.3 | 115
Long-context Language Understanding | SCBench | KV Retrieval | 62.4 | 16
