KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization

About

LLMs are seeing growing use in applications that require large context windows, and with these large context windows, KV cache activations surface as the dominant contributor to memory consumption during inference. Quantization is a promising approach for compressing KV cache activations; however, existing solutions fail to represent activations accurately in sub-4-bit precision. Our work, KVQuant, facilitates low-precision KV cache quantization by incorporating several novel methods: (i) Per-Channel Key Quantization, where we adjust the dimension along which we quantize the Key activations to better match the distribution; (ii) Pre-RoPE Key Quantization, where we quantize Key activations before the rotary positional embedding to mitigate its impact on quantization; (iii) Non-Uniform KV Cache Quantization, where we derive per-layer sensitivity-weighted non-uniform datatypes that better represent the distributions; and (iv) Per-Vector Dense-and-Sparse Quantization, where we isolate outliers separately for each vector to minimize skews in quantization ranges. By applying our method to the LLaMA, Llama-2, Llama-3, and Mistral models, we achieve < 0.1 perplexity degradation with 3-bit quantization on both Wikitext-2 and C4, outperforming existing approaches. Our method enables serving LLaMA-7B with a context length of up to 1 million tokens on a single A100-80GB GPU and up to 10 million tokens on an 8-GPU system. We develop custom CUDA kernels for KVQuant, achieving up to ~1.7x speedups over baseline fp16 matrix-vector multiplications for the LLaMA-7B model.
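To make ideas (i) and (iv) concrete, here is a minimal NumPy sketch of per-channel quantization and per-vector dense-and-sparse quantization. This is an illustrative toy, not the paper's implementation: the function names and the synthetic Key matrix are ours, it uses plain uniform levels rather than the paper's sensitivity-weighted non-uniform datatypes, and the real system stores packed low-bit codes and a sparse fp16 matrix with custom CUDA kernels instead of dequantizing in NumPy.

```python
import numpy as np

def quantize_uniform(X, n_bits=3, axis=0):
    """Uniform fake-quantization with one scale/zero-point per slice
    along `axis`. For a Key matrix of shape (tokens, channels),
    axis=0 gives per-channel ranges and axis=1 gives per-token ranges."""
    qmax = 2 ** n_bits - 1
    lo = X.min(axis=axis, keepdims=True)
    hi = X.max(axis=axis, keepdims=True)
    scale = np.where(hi > lo, (hi - lo) / qmax, 1.0)
    q = np.clip(np.round((X - lo) / scale), 0, qmax)
    return q * scale + lo  # dequantized reconstruction

def dense_and_sparse(K, n_bits=3, outlier_frac=0.01):
    """Per-vector dense-and-sparse sketch: for each channel, keep the
    largest-magnitude entries exact (the 'sparse' part, fp16 in the
    paper) and quantize the remainder with a tighter per-channel range."""
    n_out = max(1, int(outlier_frac * K.shape[0]))
    qmax = 2 ** n_bits - 1
    rec = np.empty_like(K, dtype=float)
    for c in range(K.shape[1]):
        col = K[:, c]
        order = np.argsort(np.abs(col))
        out_idx, in_idx = order[-n_out:], order[:-n_out]
        lo, hi = col[in_idx].min(), col[in_idx].max()
        scale = (hi - lo) / qmax if hi > lo else 1.0
        q = np.clip(np.round((col[in_idx] - lo) / scale), 0, qmax)
        rec[in_idx, c] = q * scale + lo   # dense, quantized part
        rec[out_idx, c] = col[out_idx]    # outliers kept exact
    return rec

# A synthetic Key matrix with one outlier channel of consistently
# large magnitude, mimicking the pre-RoPE Key distribution:
rng = np.random.default_rng(0)
K = rng.normal(size=(64, 16))
K[:, 3] *= 50.0

err_token = np.abs(quantize_uniform(K, axis=1) - K).mean()
err_chan = np.abs(quantize_uniform(K, axis=0) - K).mean()
err_ds = np.abs(dense_and_sparse(K, outlier_frac=0.05) - K).mean()
```

On data like this, `err_token > err_chan > err_ds`: per-token ranges are blown up by the outlier channel in every row, per-channel ranges isolate it to a single channel, and pulling the largest entries out into the sparse part tightens the dense range further.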

Coleman Hooper, Sehoon Kim, Hiva Mohammadzadeh, Michael W. Mahoney, Yakun Sophia Shao, Kurt Keutzer, Amir Gholami · 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Language Modeling | WikiText-2 (test) | PPL | 2.6 | 1541 |
| Language Modeling | C4 | Perplexity | 4.98 | 1182 |
| Language Modeling | WikiText-2 | – | – | 841 |
| Language Modeling | C4 (test) | Perplexity | 5.72 | 268 |
| Long-context Language Understanding | LongBench | M-Avg | 31.21 | 219 |
| Physical Commonsense Reasoning | PIQA (val) | Accuracy | 80.74 | 113 |
| Mathematical Reasoning | MATH 500 | – | – | 106 |
| Commonsense Reasoning | WinoGrande (val) | Accuracy | 73.88 | 87 |
| Long-context Understanding | LongBench (test) | Avg Score | 52.43 | 80 |
| Question Answering | ARC Challenge (val) | Accuracy | 49.91 | 72 |
Showing 10 of 20 rows

Other info

Code
