
ParisKV: Fast and Drift-Robust KV-Cache Retrieval for Long-Context LLMs

About

KV-cache retrieval is essential for long-context LLM inference, yet existing methods struggle with distribution drift and high latency at scale. We introduce ParisKV, a drift-robust, GPU-native KV-cache retrieval framework that combines collision-based candidate selection with a quantized inner-product reranking estimator. For million-token contexts, ParisKV supports CPU-offloaded KV caches via Unified Virtual Addressing (UVA), enabling on-demand top-$k$ fetching with minimal overhead. ParisKV matches or outperforms full attention quality on long-input and long-generation benchmarks. It achieves state-of-the-art long-context decoding efficiency: it matches or exceeds full attention speed even at batch size 1 for long contexts, delivers up to 2.8$\times$ higher throughput within full attention's runnable range, and scales to million-token contexts where full attention runs out of memory. At million-token scale, ParisKV reduces decode latency by 17$\times$ and 44$\times$ compared to MagicPIG and PQCache, respectively, two state-of-the-art KV-cache top-$k$ retrieval baselines.
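The abstract describes a two-stage retrieval pipeline: a cheap collision-based filter first narrows the cached keys to a candidate set, and a quantized inner-product estimator then reranks the candidates to pick the top-$k$ entries to fetch. The sketch below illustrates that general idea with NumPy, assuming sign-random-projection hashing for the collision stage and symmetric int8 quantization for the reranking stage; both choices are illustrative assumptions, not ParisKV's actual design.

```python
import numpy as np

# Toy dimensions (illustrative, not from the paper).
rng = np.random.default_rng(0)
d, n_keys, n_hashes, k = 64, 1024, 16, 8

keys = rng.standard_normal((n_keys, d)).astype(np.float32)   # cached attention keys
query = rng.standard_normal(d).astype(np.float32)            # current decode query

# Stage 1: collision-based candidate selection.
# Sign random projections give each vector an n_hashes-bit signature;
# keys whose signature nearly matches the query's become candidates.
planes = rng.standard_normal((n_hashes, d)).astype(np.float32)
key_sigs = (keys @ planes.T) > 0          # (n_keys, n_hashes) bit signatures
query_sig = (planes @ query) > 0          # (n_hashes,)
collisions = (key_sigs == query_sig).sum(axis=1)
candidates = np.flatnonzero(collisions >= collisions.max() - 2)

# Stage 2: rerank candidates with a quantized inner-product estimate.
# Symmetric int8 quantization; the integer dot product approximates the
# float score up to the shared scale factor.
scale = np.abs(keys).max() / 127.0
keys_q = np.round(keys / scale).astype(np.int8)
query_q = np.clip(np.round(query / scale), -127, 127).astype(np.int8)
scores = (keys_q[candidates].astype(np.int32) @ query_q.astype(np.int32)) * scale * scale

# Indices of the top-k keys to fetch (e.g. from a CPU-offloaded cache).
topk = candidates[np.argsort(scores)[-k:][::-1]]
```

In a real system the candidate threshold, hash family, and quantization scheme would be tuned per layer and head; here they are fixed constants for clarity.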

Yanlin Qi, Xinhang Chen, Huiqiang Jiang, Qitong Wang, Botao Peng, Themis Palpanas • 2026

Related benchmarks

Task                                 Dataset        Metric            Result   Rank
Long-context Language Understanding  LongBench v2   Overall Accuracy  33.07    20
Reasoning                            AIME 25        Pass@8            80       16
Long-generation reasoning            GPQA Diamond   Pass@1            72.22    12
Long-generation reasoning            MATH500        Pass@1            0.93     12
