
PQCache: Product Quantization-based KVCache for Long Context LLM Inference

About

As the field of Large Language Models (LLMs) continues to evolve, the context length used in inference is steadily growing. The Key-Value Cache (KVCache), which stores the intermediate representations of tokens during LLM inference, has become the primary memory bottleneck given limited GPU memory. To address this issue, current methods selectively determine suitable keys and values for self-attention computation, but they either fall short in maintaining model quality or incur high serving latency. Drawing inspiration from advanced embedding retrieval techniques in the data management community, we treat the storage and retrieval of KVCache as a typical embedding retrieval problem. We propose PQCache, which employs Product Quantization (PQ) to manage KVCache, maintaining model quality while ensuring low serving latency. During the prefilling phase, we apply PQ to the tokens' keys for each LLM layer and head. During the autoregressive decoding phase, we use the PQ codes and centroids to approximately identify important preceding tokens, then fetch the corresponding key-value pairs for self-attention computation. Through careful overlapping and caching designs, we minimize the additional computation and communication overhead in both phases. Extensive experiments demonstrate that PQCache achieves both effectiveness and efficiency, with a 4.60% score improvement over existing methods on InfiniteBench and low system latency in both the prefilling and decoding phases.
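The decoding-phase retrieval described above can be sketched in a few lines of NumPy. This is a minimal illustration of product quantization over cached keys, not the paper's implementation: all dimensions, the centroid construction (random picks standing in for k-means), and the variable names are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions): head dim 64, split into m=4 sub-vectors
# of 16 dims each, with k=16 centroids per sub-space.
d, m, k = 64, 4, 16
sub = d // m
n_tokens = 512

keys = rng.standard_normal((n_tokens, d)).astype(np.float32)

# --- Prefill: quantize each token's key into m short PQ codes ---
# Real PQ fits centroids with k-means; random picks stand in here.
centroids = np.stack([
    keys[rng.choice(n_tokens, k, replace=False), i * sub:(i + 1) * sub]
    for i in range(m)
])                                                     # (m, k, sub)
codes = np.empty((n_tokens, m), dtype=np.int64)
for i in range(m):
    # Assign each key sub-vector to its nearest centroid (squared L2).
    dists = ((keys[:, i * sub:(i + 1) * sub, None] -
              centroids[i].T[None]) ** 2).sum(axis=1)  # (n_tokens, k)
    codes[:, i] = dists.argmin(axis=1)

# --- Decode: approximate q.k scores from codes and centroids alone ---
q = rng.standard_normal(d).astype(np.float32)
# Lookup table: inner product of each query sub-vector with every centroid.
lut = np.stack([centroids[i] @ q[i * sub:(i + 1) * sub]
                for i in range(m)])                    # (m, k)
approx_scores = lut[np.arange(m), codes].sum(axis=1)   # (n_tokens,)

# Fetch only the top-t tokens' exact key-value pairs for attention.
t = 32
topk = np.argsort(approx_scores)[-t:]
```

The point of the sketch is the cost profile: scoring every cached token needs only the tiny `(m, k)` lookup table plus integer codes, so the full-precision keys and values of just the top-t tokens are fetched for the actual self-attention computation.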

Hailin Zhang, Xiaodong Ji, Yilin Chen, Fangcheng Fu, Xupeng Miao, Xiaonan Nie, Weipeng Chen, Bin Cui • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Long-context Understanding | LongBench (test) | Avg Score | 51.7 | 80 |
| Long-context Language Understanding | InfiniteBench | En.Sum | 18.9 | 63 |
| Long-context Understanding | InfiniteBench v1 (test) | Dialogue | 15 | 31 |
| Long-context Language Understanding | RULER 32k context length | Average Score | 86.9 | 30 |
| Long-context evaluation | RULER 64k | VT Score | 52.8 | 29 |
| Long-context evaluation | RULER 128k | Query Metric (MQ) | 5 | 29 |
| Long-context Understanding | LongBench | 2WikiMQA | 48.63 | 25 |
| Long-context Understanding | LongBench v1 (test) | SD QA | 48.4 | 21 |
| Long-context Language Understanding | LongBench v2 | Overall Accuracy | 25.5 | 20 |
| Decoding Latency | Synthetic Context Sequences (test) | Latency (16k Context) | 0.085 | 16 |

Showing 10 of 21 rows
