
MagicPIG: LSH Sampling for Efficient LLM Generation

About

Large language models (LLMs) with long context windows have gained significant attention. However, the KV cache, stored to avoid re-computation, becomes a bottleneck. Various dynamic sparse or TopK-based attention approximation methods have been proposed to exploit the common insight that attention is sparse. In this paper, we first show that TopK attention itself suffers from quality degradation on certain downstream tasks because attention is not always as sparse as expected. Rather than selecting the keys and values with the highest attention scores, sampling with theoretical guarantees can provide a better estimation of the attention output. To make sampling-based approximation practical in LLM generation, we propose MagicPIG, a heterogeneous system based on Locality Sensitive Hashing (LSH). MagicPIG significantly reduces the workload of attention computation while preserving high accuracy across diverse tasks. MagicPIG stores the LSH hash tables and runs the attention computation on the CPU, which allows it to serve longer contexts and larger batch sizes with high approximation accuracy. MagicPIG improves decoding throughput by up to 5× across various GPU hardware and achieves 54 ms decoding latency on a single RTX 4090 for the Llama-3.1-8B-Instruct model with a context of 96k tokens. The code is available at https://github.com/Infini-AI-Lab/MagicPIG.
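To illustrate the core idea, here is a minimal NumPy sketch of LSH-based key selection for attention: random hyperplanes (SimHash) hash the query and all cached keys, and only keys that collide with the query in at least one hash table participate in the softmax. This is a simplified illustration of collision-based selection, not the paper's full importance-sampling estimator or its CPU/GPU heterogeneous implementation; the function name, table count, and bit width are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def simhash(x, planes):
    # Sign-bit codes from random hyperplanes (SimHash).
    return (x @ planes.T > 0).astype(np.uint8)

def lsh_sampled_attention(q, K, V, num_tables=4, bits=8):
    """Approximate softmax(q K^T / sqrt(d)) V using only keys whose
    hash code collides with the query's in at least one table."""
    d = q.shape[-1]
    selected = np.zeros(K.shape[0], dtype=bool)
    for _ in range(num_tables):
        planes = rng.standard_normal((bits, d))
        q_code = simhash(q, planes)               # shape (bits,)
        k_code = simhash(K, planes)               # shape (n, bits)
        selected |= (k_code == q_code).all(axis=1)
    if not selected.any():                        # degenerate case: fall back to full attention
        selected[:] = True
    Ks, Vs = K[selected], V[selected]
    scores = Ks @ q / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ Vs, int(selected.sum())
```

Keys close to the query in angle are more likely to collide, so the selected subset is biased toward high-attention keys while remaining a random sample, which is what distinguishes this approach from deterministic TopK selection.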

Zhuoming Chen, Ranajoy Sadhukhan, Zihao Ye, Yang Zhou, Jianyu Zhang, Niklas Nolte, Yuandong Tian, Matthijs Douze, Leon Bottou, Zhihao Jia, Beidi Chen • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Long-context Reasoning | BABILong 16k | Accuracy | 28.4 | 72
Long-context language modeling | RULER 16k context | Accuracy (RULER 16K) | 77.5 | 72
Long-context language modeling evaluation | RULER Context Length = 8K | Average Accuracy (RULER 8K) | 81.7 | 72
Long-context Reasoning | BABILong 8K | Accuracy | 33 | 65
Long-context Reasoning | BABILong 4K | Accuracy (BABILong 4k) | 35.6 | 51
Long-context language modeling | RULER | Accuracy | 84.6 | 51
Long-context Language Understanding | LongBench v2 | Overall Accuracy | 16.7 | 47
Long-context language modeling | LongBench-E 1.0 (test) | S-Doc QA Perf. | 26.67 | 37
Long-context Language Understanding | RULER 32k context length | VT Score | 82.8 | 33
Long-context Language Understanding | LongBench v1 (test) | NrtvQA Score | 25.5 | 22

(Showing 10 of 19 rows.)
