
H$_2$O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models

About

Large Language Models (LLMs), despite their recent impressive accomplishments, are notably cost-prohibitive to deploy, particularly for applications involving long-content generation, such as dialogue systems and story writing. Often, a large amount of transient state information, referred to as the KV cache, is stored in GPU memory in addition to model parameters, scaling linearly with the sequence length and batch size. In this paper, we introduce a novel approach for implementing the KV cache that significantly reduces its memory footprint. Our approach is based on the noteworthy observation that a small portion of tokens contributes most of the value when computing attention scores. We call these tokens Heavy Hitters (H$_2$). Through a comprehensive investigation, we find that (i) the emergence of H$_2$ is natural and strongly correlates with the frequent co-occurrence of tokens in the text, and (ii) removing them results in significant performance degradation. Based on these insights, we propose Heavy Hitter Oracle (H$_2$O), a KV cache eviction policy that dynamically retains a balance of recent and H$_2$ tokens. We formulate KV cache eviction as a dynamic submodular problem and prove (under mild assumptions) a theoretical guarantee for our novel eviction algorithm, which could help guide future work. We validate the accuracy of our algorithm with OPT, LLaMA, and GPT-NeoX across a wide range of tasks. Our implementation of H$_2$O with 20% heavy hitters improves the throughput over three leading inference systems (DeepSpeed Zero-Inference, Hugging Face Accelerate, and FlexGen) by up to 29$\times$, 29$\times$, and 3$\times$ on OPT-6.7B and OPT-30B. With the same batch size, H$_2$O can reduce the latency by up to 1.9$\times$. The code is available at https://github.com/FMInference/H2O.
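The core idea — score each cached token by the attention mass it has accumulated, then keep the top-scoring "heavy hitter" tokens alongside a window of recent tokens — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name `h2o_evict` and its batch-selection interface are assumptions (the actual algorithm evicts greedily one token per decoding step, per attention head).

```python
import numpy as np

def h2o_evict(attn_history, cache_size, recent_window):
    """Choose which cached key/value positions to keep under an H2O-style policy.

    attn_history: (num_queries, num_keys) array of attention weights seen so far.
    Keeps the `recent_window` most recent tokens plus enough of the
    highest accumulated-attention ("heavy hitter") tokens to fill `cache_size`.
    """
    num_keys = attn_history.shape[1]
    if num_keys <= cache_size:
        return list(range(num_keys))  # cache not full yet: keep everything
    # Heavy-hitter statistic: total attention mass accumulated by each key token.
    scores = attn_history.sum(axis=0)
    recent = set(range(num_keys - recent_window, num_keys))
    # Heavy-hitter slots are filled from tokens outside the recent window.
    candidates = [i for i in range(num_keys) if i not in recent]
    num_heavy = cache_size - recent_window
    heavy = sorted(candidates, key=lambda i: scores[i], reverse=True)[:num_heavy]
    return sorted(set(heavy) | recent)
```

For example, with six cached tokens, a budget of four, and a recent window of two, the policy keeps the last two positions and the two earlier positions with the highest column sums of the attention history — everything else is evicted.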

Zhenyu Zhang, Ying Sheng, Tianyi Zhou, Tianlong Chen, Lianmin Zheng, Ruisi Cai, Zhao Song, Yuandong Tian, Christopher Ré, Clark Barrett, Zhangyang Wang, Beidi Chen • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Visual Question Answering | VQA v2 | Accuracy: 78.9 | 1165 |
| Mathematical Reasoning | GSM8K | Accuracy: 57 | 983 |
| Visual Question Answering | GQA | Accuracy: 62.3 | 963 |
| Object Hallucination Evaluation | POPE | Accuracy: 87.3 | 935 |
| Language Modeling | WikiText-2 | -- | 841 |
| Commonsense Reasoning | PIQA | Accuracy: 79.22 | 647 |
| Text-based Visual Question Answering | TextVQA | Accuracy: 57.5 | 496 |
| Question Answering | OpenBookQA | Accuracy: 43.8 | 465 |
| Mathematical Reasoning | MATH500 (test) | Accuracy: 31 | 381 |
| Multimodal Understanding | MMBench | -- | 367 |

(10 of 109 rows shown.)

Code

https://github.com/FMInference/H2O
