
KV Cache is 1 Bit Per Channel: Efficient Large Language Model Inference with Coupled Quantization

About

Efficient deployment of Large Language Models (LLMs) requires batching multiple requests together to improve throughput. As the batch size, context length, or model size increases, the size of the key and value (KV) cache can quickly become the main contributor to GPU memory usage and the bottleneck of inference latency. Quantization has emerged as an effective technique for KV cache compression, but existing methods still fail at very low bit widths. We observe that distinct channels of a key/value activation embedding are highly inter-dependent, and the joint entropy of multiple channels grows at a slower rate than the sum of their marginal entropies. Based on this insight, we propose Coupled Quantization (CQ), which couples multiple key/value channels together to exploit their inter-dependency and encode the activations in a more information-efficient manner. Extensive experiments reveal that CQ outperforms or is competitive with existing baselines in preserving model quality. Furthermore, we demonstrate that CQ can preserve model quality with KV cache quantized down to 1-bit.
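The core observation, that the joint entropy of correlated channels grows more slowly than the sum of their marginal entropies, can be illustrated with a toy sketch. The two synthetic "channels", their correlation strength, and the 4-level discretization below are invented for illustration and are not the paper's data or method:

```python
import math
import random
from collections import Counter

def entropy(samples):
    """Empirical Shannon entropy in bits."""
    counts = Counter(samples)
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

random.seed(0)
# Two hypothetical key/value channels, each discretized to 4 levels.
# Channel b is strongly correlated with channel a, mimicking the
# inter-channel dependency the paper observes in KV activations.
a = [random.randrange(4) for _ in range(100_000)]
b = [(x + (1 if random.random() < 0.1 else 0)) % 4 for x in a]

h_a, h_b = entropy(a), entropy(b)
h_joint = entropy(list(zip(a, b)))

# Coding each channel with its own codebook costs roughly H(a) + H(b)
# bits per pair; a coupled codebook over (a, b) pairs costs H(a, b),
# which is much smaller when the channels are inter-dependent.
print(f"H(a) + H(b) = {h_a + h_b:.2f} bits")
print(f"H(a, b)     = {h_joint:.2f} bits")
```

Here H(a) and H(b) are each close to 2 bits, while the joint entropy is well below 4 bits, so jointly coding the pair is more information-efficient, which is the intuition behind coupling channels in CQ.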

Tianyi Zhang, Jonah Yi, Zhaozhuo Xu, Anshumali Shrivastava • 2024

Related benchmarks

Task                            Dataset               Metric        Result   Rank
Language Modeling               WikiText-2 (test)     Perplexity    4.59     1541
Language Modeling               C4 (test)             Perplexity    5.74     268
Physical Commonsense Reasoning  PIQA (val)            Accuracy      80.52    113
Commonsense Reasoning           WinoGrande (val)      Accuracy      73.48    87
Question Answering              ARC Challenge (val)   Accuracy      49.15    72
Long-Context Language Modeling  LongBench (test)      Qasper Score  9.58     5
