
Layer-Condensed KV Cache for Efficient Inference of Large Language Models

About

Huge memory consumption has been a major bottleneck for deploying high-throughput large language models in real-world applications. In addition to the large number of parameters, the key-value (KV) cache for the attention mechanism in the transformer architecture consumes a significant amount of memory, especially when the number of layers is large for deep language models. In this paper, we propose a novel method that only computes and caches the KVs of a small number of layers, thus significantly saving memory consumption and improving inference throughput. Our experiments on large language models show that our method achieves up to 26$\times$ higher throughput than standard transformers and competitive performance in language modeling and downstream tasks. In addition, our method is orthogonal to existing transformer memory-saving techniques, so it is straightforward to integrate them with our model, achieving further improvement in inference efficiency. Our code is available at https://github.com/whyNLP/LCKV.
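The core idea, caching the KVs of only a small number of layers while all layers attend to them, can be illustrated with a minimal sketch. This is not the paper's implementation (see the linked repository for that); the random projections, layer count, and single shared KV pair below are illustrative assumptions, chosen only to show how the cache footprint shrinks when one layer's KVs serve every layer.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Single-head scaled dot-product attention.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

rng = np.random.default_rng(0)
num_layers, seq_len, d = 8, 16, 32
x = rng.standard_normal((seq_len, d))

# One shared KV pair standing in for the designated layer's keys/values
# (random projections here, purely for illustration).
k_shared = x @ (rng.standard_normal((d, d)) / np.sqrt(d))
v_shared = x @ (rng.standard_normal((d, d)) / np.sqrt(d))

# Every layer computes its own queries but attends to the shared KVs,
# so only one layer's KVs need to be cached during decoding.
h = x
for _ in range(num_layers):
    q = h @ (rng.standard_normal((d, d)) / np.sqrt(d))
    h = h + attention(q, k_shared, v_shared)  # residual connection

standard_kv_floats = 2 * num_layers * seq_len * d  # K and V per layer
condensed_kv_floats = 2 * 1 * seq_len * d          # K and V for one layer
print(f"KV cache reduction: {standard_kv_floats // condensed_kv_floats}x")
```

With these toy sizes the condensed cache is 8x smaller (one layer's KVs instead of eight); in practice the reduction scales with the depth of the model, which is where the throughput gains come from.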

Haoyi Wu, Kewei Tu • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Commonsense Reasoning | BoolQ, PIQA, HellaSwag, Winogrande (zero-shot) | Avg Commonsense Accuracy: 46.84 | 34 |
| LLM Generation | RTX 3090 24GB (inference) | Max Batch Size: 1.15e+3 | 24 |
| LLM Generation | A100 80GB (inference) | Maximum Batch Size: 128 | 6 |
| Language Modeling | SlimPajama 10M (dev) | Perplexity: 9.265 | 3 |

Other info

Code
