PyramidKV: Dynamic KV Cache Compression based on Pyramidal Information Funneling

About

In this study, we investigate whether attention-based information flow inside large language models (LLMs) follows noticeable patterns during long-context processing. Our observations reveal that LLMs aggregate information through Pyramidal Information Funneling: attention scatters widely in lower layers, progressively consolidates within specific contexts, and ultimately focuses on critical tokens (also known as massive activations or attention sinks) in higher layers. Motivated by these insights, we developed PyramidKV, a novel and effective KV cache compression method. This approach dynamically adjusts the KV cache size across layers, allocating more cache in lower layers and less in higher ones, diverging from traditional methods that maintain a uniform KV cache size. Our experimental evaluations on the LongBench benchmark show that PyramidKV matches the performance of models with a full KV cache while retaining only 12% of the KV cache, thus significantly reducing memory usage. In scenarios emphasizing memory efficiency, where only 0.7% of the KV cache is maintained, PyramidKV surpasses other KV cache compression techniques, achieving up to a 20.5-point absolute accuracy improvement on the TREC dataset. In the Needle-in-a-Haystack experiment, PyramidKV outperforms competing methods in maintaining long-context comprehension; notably, retaining just 128 KV cache entries enables the LLaMA-3-70B model to achieve 100.0% accuracy.
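To make the layer-wise allocation idea concrete, here is a minimal sketch, not the authors' released code, of how a pyramidal KV budget could be split across layers and how one layer might keep only its most-attended cache entries. The function names and the `ratio` knob (lowest-layer budget divided by highest-layer budget) are assumptions for illustration only.

```python
# Hypothetical sketch of a PyramidKV-style allocation (not the official implementation):
# per-layer cache budgets shrink linearly from the lowest to the highest layer
# while summing to a fixed total, and each layer keeps the positions that
# recent queries attend to most strongly.
import torch


def pyramidal_budgets(total_budget: int, num_layers: int, ratio: float = 8.0) -> list[int]:
    """Split `total_budget` KV slots across layers, giving lower layers more.

    `ratio` (an assumed hyperparameter) is the budget of the lowest layer
    divided by the budget of the highest layer; layers in between are
    interpolated linearly, forming the "pyramid" shape.
    """
    avg = total_budget / num_layers
    top = 2 * avg / (1 + ratio)               # budget of the highest layer
    bottom = ratio * top                      # budget of the lowest layer
    step = (bottom - top) / max(num_layers - 1, 1)
    budgets = [round(bottom - i * step) for i in range(num_layers)]
    budgets[0] += total_budget - sum(budgets)  # absorb rounding drift
    return budgets


def select_kv(keys: torch.Tensor, values: torch.Tensor,
              attn_to_cache: torch.Tensor, budget: int):
    """Evict all but the `budget` most-attended cached positions (one layer, one head).

    `keys`, `values`: (seq_len, head_dim); `attn_to_cache`: (num_recent_queries, seq_len)
    attention weights from the most recent queries to the cached positions.
    """
    scores = attn_to_cache.mean(dim=0)                                 # pool over recent queries
    keep = torch.topk(scores, k=min(budget, scores.numel())).indices
    keep = keep.sort().values                                          # preserve positional order
    return keys[keep], values[keep]


if __name__ == "__main__":
    # e.g. 2048 total slots over 32 layers: lower layers get ~114 slots, higher ones ~14.
    print(pyramidal_budgets(total_budget=2048, num_layers=32))
```

The intent matches the description above: uniform-budget baselines give every layer `total_budget / num_layers` slots, whereas this sketch front-loads the budget into lower layers, where attention is spread over many tokens, and starves higher layers, where attention concentrates on a few sinks.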

Zefan Cai, Yichi Zhang, Bofei Gao, Yuliang Liu, Yucheng Li, Tianyu Liu, Keming Lu, Wayne Xiong, Yue Dong, Junjie Hu, Wen Xiao• 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-turn Dialogue Evaluation | MT-Bench | Overall Score | 8.42 | 447 |
| Long-context Language Understanding | LongBench | M-Avg | 45.29 | 292 |
| Mathematical Reasoning | MathVista | Accuracy | 21.1 | 257 |
| Long-context Language Modeling | LongBench | Average Score | 39.39 | 164 |
| Long-context Language Understanding | LongBench (test) | Average Score | 48.51 | 147 |
| Long-context Understanding | LongBench (test) | Avg Score | 48.78 | 136 |
| Long-context Understanding | LongBench | Overall Average Score | 49.5 | 115 |
| Single-Doc Question Answering | LongBench | MultifieldQA Score | 47.81 | 75 |
| Long-context Language Modeling Evaluation | RULER (Context Length = 8K) | Average Accuracy | 80.99 | 72 |
| Long-context Question Answering | LongBench (test) | HotpotQA | 31.22 | 69 |

Showing 10 of 40 rows.
