
Attention Score is not All You Need for Token Importance Indicator in KV Cache Reduction: Value Also Matters

About

Scaling the context size of large language models (LLMs) enables them to perform various new tasks, e.g., book summarization. However, the memory cost of the Key and Value (KV) cache in attention significantly limits the practical applications of LLMs. Recent works have explored token pruning for KV cache reduction in LLMs, relying solely on attention scores as a token importance indicator. However, our investigation into value vector norms reveals a notably non-uniform pattern, which calls into question the reliance on attention scores alone. Inspired by this, we propose a new method, Value-Aware Token Pruning (VATP), which uses both attention scores and the $ \ell_{1} $ norm of value vectors to evaluate token importance. Extensive experiments on LLaMA2-7B-chat and Vicuna-v1.5-7B across 16 LongBench tasks demonstrate that VATP outperforms attention-score-only baselines in over 12 tasks, confirming the effectiveness of incorporating value vector norms into token importance evaluation of LLMs.
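The core idea can be sketched in a few lines of NumPy. This is a minimal illustration based only on the abstract, not the authors' implementation: it assumes a per-token importance score formed by multiplying each token's accumulated attention score by the $\ell_1$ norm of its value vector, then keeping the top-scoring tokens. The function names (`vatp_importance`, `select_tokens_to_keep`) and the exact aggregation of attention scores are assumptions for illustration.

```python
import numpy as np

def vatp_importance(attn_scores, values):
    """Value-aware importance (sketch): attention score scaled by the
    L1 norm of each token's value vector.

    attn_scores: (seq_len,) accumulated attention each cached token receives
    values:      (seq_len, head_dim) value vectors in the KV cache
    returns:     (seq_len,) importance score per token
    """
    v_l1 = np.abs(values).sum(axis=-1)   # L1 norm of each value vector
    return attn_scores * v_l1            # attention alone would ignore v_l1

def select_tokens_to_keep(attn_scores, values, budget):
    """Return (sorted) indices of the `budget` most important tokens."""
    scores = vatp_importance(attn_scores, values)
    keep = np.argsort(scores)[-budget:]  # top-`budget` tokens by score
    return np.sort(keep)                 # preserve original token order
```

A token with a high attention score but a near-zero value vector contributes little to the attention output, so value-aware scoring can prune it where an attention-only criterion would keep it.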

Zhiyu Guo, Hidetaka Kamigaito, Taro Watanabe • 2024

Related benchmarks

| Task                   | Dataset  | Metric         | Result | Rank |
|------------------------|----------|----------------|--------|------|
| Mathematical Reasoning | GSM8K    | Accuracy       | 96.43  | 1362 |
| Mathematical Reasoning | MATH     | Accuracy       | 91     | 882  |
| Mathematical Reasoning | AMC      | Accuracy (ACC) | 82.5   | 203  |
| Mathematical Reasoning | Olympiad | Accuracy       | 55.41  | 137  |
| Mathematical Reasoning | Minerva  | Accuracy (%)   | 34.19  | 67   |
