PQCache: Product Quantization-based KVCache for Long Context LLM Inference
About
As the field of Large Language Models (LLMs) continues to evolve, the context length in inference is steadily growing. Key-Value Cache (KVCache), the intermediate representation of tokens within LLM inference, has become the primary memory bottleneck due to limited GPU memory. To address this, current methods selectively determine suitable keys and values for self-attention computation, but they either fall short in maintaining model quality or incur high serving latency. Drawing inspiration from advanced embedding retrieval techniques prevalent in the data management community, we treat the storage and retrieval of KVCache as a typical embedding retrieval problem. We propose PQCache, which employs Product Quantization (PQ) to manage KVCache, maintaining model quality while ensuring low serving latency. During the prefilling phase, we apply PQ to tokens' keys for each LLM layer and head. During the autoregressive decoding phase, we use PQ codes and centroids to approximately identify important preceding tokens, then fetch the corresponding key-value pairs for self-attention computation. Through meticulous design of overlapping and caching, we minimize any additional computation and communication overhead during both phases. Extensive experiments demonstrate that PQCache achieves both effectiveness and efficiency, with a 4.60% score improvement over existing methods on InfiniteBench and low system latency in both prefilling and decoding.
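The abstract's core mechanism can be illustrated with a minimal NumPy sketch of Product Quantization applied to attention keys. This is not the paper's implementation (PQCache operates per layer and per head on the GPU, with overlapped communication); the function names `pq_train`, `pq_encode`, and `approx_topk`, and all hyperparameters below, are hypothetical choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def pq_train(keys, m, k, iters=10):
    """Illustrative PQ training: split each d-dim key into m sub-vectors
    and run a few Lloyd (k-means) iterations per sub-space to get k
    centroids each. Returns a list of m (k, d/m) codebooks."""
    n, d = keys.shape
    ds = d // m
    codebooks = []
    for j in range(m):
        sub = keys[:, j * ds:(j + 1) * ds]
        cent = sub[rng.choice(n, k, replace=False)].copy()
        for _ in range(iters):
            assign = np.argmin(((sub[:, None, :] - cent[None]) ** 2).sum(-1), axis=1)
            for c in range(k):
                pts = sub[assign == c]
                if len(pts):
                    cent[c] = pts.mean(axis=0)
        codebooks.append(cent)
    return codebooks

def pq_encode(keys, codebooks):
    """Replace each key by m one-byte centroid ids (its PQ code),
    shrinking per-token key storage from d floats to m bytes."""
    m = len(codebooks)
    ds = keys.shape[1] // m
    codes = np.empty((keys.shape[0], m), dtype=np.uint8)
    for j, cent in enumerate(codebooks):
        sub = keys[:, j * ds:(j + 1) * ds]
        codes[:, j] = np.argmin(((sub[:, None, :] - cent[None]) ** 2).sum(-1), axis=1)
    return codes

def approx_topk(query, codes, codebooks, topk):
    """Decoding step: approximate q.k for every cached token via
    per-sub-space lookup tables over centroids, then keep the top-k
    token indices whose full key-value pairs would be fetched."""
    m = len(codebooks)
    ds = query.shape[0] // m
    # lut[j, c] = <query sub-vector j, centroid c of sub-space j>
    lut = np.stack([codebooks[j] @ query[j * ds:(j + 1) * ds] for j in range(m)])
    scores = lut[np.arange(m)[None, :], codes].sum(axis=1)
    return np.argsort(scores)[::-1][:topk], scores

# Toy usage: 512 cached tokens, head dim 64, 8 sub-spaces, 16 centroids each.
keys = rng.standard_normal((512, 64)).astype(np.float32)
codebooks = pq_train(keys, m=8, k=16)
codes = pq_encode(keys, codebooks)
query = rng.standard_normal(64).astype(np.float32)
top_tokens, scores = approx_topk(query, codes, codebooks, topk=32)
```

Only the compact `codes` and small codebooks need to stay in GPU memory; the selected `top_tokens` indicate which full key-value pairs to fetch for exact self-attention.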
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Long-context Understanding | LongBench (test) | Avg Score | 51.7 | 80 |
| Long-context Language Understanding | InfiniteBench | En.Sum | 18.9 | 63 |
| Long-context Understanding | InfiniteBench v1 (test) | Dialogue | 15 | 31 |
| Long-context Language Understanding | RULER 32k context length | Average Score | 86.9 | 30 |
| Long-context evaluation | RULER 64k | VT Score | 52.8 | 29 |
| Long-context evaluation | RULER 128k | Query Metric (MQ) | 5 | 29 |
| Long-context Understanding | LongBench | 2WikiMQA | 48.63 | 25 |
| Long-context Understanding | LongBench v1 (test) | SD QA | 48.4 | 21 |
| Long-context Language Understanding | LongBench v2 | Overall Accuracy | 25.5 | 20 |
| Decoding Latency | Synthetic Context Sequences (test) | Latency (16k Context) | 0.085 | 16 |