# Layer-Condensed KV Cache for Efficient Inference of Large Language Models

## About
Huge memory consumption has been a major bottleneck for deploying high-throughput large language models in real-world applications. In addition to the large number of parameters, the key-value (KV) cache for the attention mechanism in the transformer architecture consumes a significant amount of memory, especially when the number of layers is large for deep language models. In this paper, we propose a novel method that only computes and caches the KVs of a small number of layers, thus significantly saving memory consumption and improving inference throughput. Our experiments on large language models show that our method achieves up to 26$\times$ higher throughput than standard transformers and competitive performance in language modeling and downstream tasks. In addition, our method is orthogonal to existing transformer memory-saving techniques, so it is straightforward to integrate them with our model, achieving further improvement in inference efficiency. Our code is available at https://github.com/whyNLP/LCKV.
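The core idea above — computing and caching KVs for only a small subset of layers, while the remaining layers attend to those cached KVs — can be sketched as follows. This is a hypothetical illustration, not the LCKV implementation; the class name, method names, and the choice of attending to the top cached layer are all assumptions for clarity.

```python
class LayerCondensedKVCache:
    """Illustrative sketch: only designated layers write KVs to the cache;
    all other layers read the KVs of the top cached layer."""

    def __init__(self, num_layers, cached_layers):
        self.num_layers = num_layers
        self.cached_layers = set(cached_layers)
        # One KV list per cached layer; non-cached layers store nothing,
        # which is where the memory saving comes from.
        self.store = {layer: [] for layer in self.cached_layers}

    def update(self, layer, k, v):
        # Writes from non-cached layers are simply dropped.
        if layer in self.cached_layers:
            self.store[layer].append((k, v))

    def get(self, layer):
        # Every layer's queries attend to the KVs of a single cached layer
        # (here: the topmost one), instead of per-layer KVs.
        top = max(self.cached_layers)
        return self.store[top]


# Caching 1 of 24 layers shrinks KV memory roughly 24x in this sketch.
cache = LayerCondensedKVCache(num_layers=24, cached_layers=[23])
cache.update(0, "k_l0", "v_l0")    # ignored: layer 0 is not cached
cache.update(23, "k_l23", "v_l23")  # stored: layer 23 is cached
```

In this sketch, `cache.get(5)` returns layer 23's single stored KV pair, since every layer reads from the one cached layer.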
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | Commonsense Reasoning (BoolQ, PIQA, HellaSwag, Winogrande), zero-shot | Avg Commonsense Accuracy | 46.84 | 34 |
| LLM Generation | RTX 3090 24GB (inference) | Max Batch Size | 1.15e+3 | 24 |
| LLM Generation | A100 80GB (inference) | Maximum Batch Size | 128 | 6 |
| Language Modeling | SlimPajama 10M (dev) | Perplexity | 9.265 | 3 |