
LookaheadKV: Fast and Accurate KV Cache Eviction by Glimpsing into the Future without Generation

About

Transformer-based large language models (LLMs) rely on key-value (KV) caching to avoid redundant computation during autoregressive inference. While this mechanism greatly improves efficiency, the cache size grows linearly with the input sequence length, quickly becoming a bottleneck for long-context tasks. Existing solutions mitigate this problem by evicting prompt KV entries deemed unimportant, guided by estimated importance scores. Notably, a recent line of work proposes to improve eviction quality by "glimpsing into the future": a draft generator produces a surrogate future response approximating the target model's true response, and this surrogate is then used to estimate the importance of cached KV entries more accurately. However, these approaches rely on computationally expensive draft generation, which introduces substantial prefilling overhead and limits their practicality in real-world deployment. To address this challenge, we propose LookaheadKV, a lightweight eviction framework that leverages the strength of surrogate future responses without requiring explicit draft generation. LookaheadKV augments transformer layers with parameter-efficient modules trained to predict true importance scores with high accuracy. Our design ensures negligible runtime overhead, comparable to existing inexpensive heuristics, while achieving accuracy superior to more costly approximation methods. Extensive experiments on long-context understanding benchmarks, across a wide range of models, demonstrate that our method not only outperforms recent competitive baselines on various long-context understanding tasks, but also reduces the eviction cost by up to 14.5x, leading to significantly faster time-to-first-token. Our code is available at https://github.com/SamsungLabs/LookaheadKV.
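The core operation the abstract describes, retaining only the cached KV pairs with the highest predicted importance scores, can be sketched as follows. This is a minimal illustration, not the actual LookaheadKV implementation: the `importance` scores here are a stand-in for the scores its trained parameter-efficient modules would predict, and `evict_kv` is a hypothetical helper name.

```python
import numpy as np

def evict_kv(keys, values, importance, budget):
    """Keep only the `budget` most important cached KV pairs.

    keys, values: arrays of shape (seq_len, head_dim)
    importance:   per-position importance scores, shape (seq_len,)
    budget:       number of KV pairs to retain
    """
    if budget >= len(importance):
        return keys, values
    # Indices of the top-`budget` scores, restored to original order
    # so positional structure of the cache is preserved.
    keep = np.sort(np.argsort(importance)[-budget:])
    return keys[keep], values[keep]

# Toy example: 6 cached positions, keep the 3 highest-scoring ones.
rng = np.random.default_rng(0)
k = rng.standard_normal((6, 4))
v = rng.standard_normal((6, 4))
scores = np.array([0.1, 0.9, 0.2, 0.8, 0.05, 0.7])
k2, v2 = evict_kv(k, v, scores, budget=3)
print(k2.shape)  # (3, 4)
```

Because the scores come from lightweight in-layer modules rather than a generated draft response, this selection can run during prefill with negligible overhead.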

Jinwoo Ahn, Ingyu Seong, Akhil Kedia, Junhan Kim, Hyemi Jang, Kangwook Lee, Yongkweon Jeon • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Multi-turn Dialogue Evaluation | MT-Bench | Overall Score: 8.51 | 447 |
| Long-context Language Understanding | LongBench | M-Avg: 45.77 | 292 |
| Long-context Language Modeling | LongBench | Average Score: 40.01 | 164 |
| Long-context Understanding | LongBench (test) | Avg Score: 49.33 | 136 |
| Long-context Evaluation | LongBench | Average Score: 100 | 57 |
| Long-context Language Evaluation | LongBench v1 (test) | NrtQA Score: 19.6 | 31 |
| Long-context Understanding | RULER 64k | Accuracy: 69.45 | 25 |
| Long-context Understanding | RULER 128k | Accuracy: 54.83 | 15 |
| Inference Efficiency | LLaMA 3.1 8B (8K context length) | Theoretical Compute (TFLOPs): 137 | 10 |
| Efficiency Analysis | Context Length 4K | Theoretical Compute (TFLOPs): 60 | 5 |

(Showing 10 of 13 rows)
