Loki: Low-rank Keys for Efficient Sparse Attention
About
Inference on large language models (LLMs) can be expensive in compute and memory, especially at long sequence lengths. In particular, the self-attention mechanism used in LLM inference contributes significantly to these costs, which has sparked interest in approximating the self-attention computation to reduce them. In this work, we propose to approximate self-attention by focusing on the dimensionality of the key vectors computed in the attention block. Our analysis reveals that key vectors lie in a significantly lower-dimensional space, consistently across several datasets and models. Exploiting this observation, we propose Loki, a novel sparse attention method that ranks and selects tokens in the KV-cache based on attention scores computed in low-dimensional space. Our evaluations show that Loki speeds up the attention computation through reduced data movement (load/store) and compute costs, while preserving model quality better than other popular approximation methods.
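The core idea above can be sketched in a few lines: project queries and keys onto the top principal components of the keys, rank cached tokens by the approximate scores computed in that reduced space, and then run exact attention only over the selected tokens. This is a minimal illustrative sketch, not the authors' implementation; the function name `loki_sketch` and its arguments are assumptions for illustration, and real systems would operate per-head on batched tensors.

```python
import numpy as np

def loki_sketch(q, K, V, pc, r, k):
    """Illustrative Loki-style sparse attention for a single head.

    q:  (d,)   query vector
    K:  (n, d) cached key vectors
    V:  (n, d) cached value vectors
    pc: (d, d) principal components of the keys (columns, sorted by
               explained variance), e.g. from an offline calibration set
    r:  reduced dimensionality used for ranking (r << d)
    k:  number of KV-cache tokens to keep for exact attention
    """
    d = q.shape[-1]
    # 1) Project the query and keys onto the top-r principal components.
    q_r = q @ pc[:, :r]            # (r,)
    K_r = K @ pc[:, :r]            # (n, r)
    # 2) Approximate attention scores in the low-dimensional space,
    #    and rank/select the top-k tokens with them.
    approx_scores = (K_r @ q_r) / np.sqrt(d)
    top = np.argpartition(-approx_scores, k - 1)[:k]
    # 3) Exact attention over the selected subset only (reduced
    #    load/store and compute relative to attending to all n keys).
    scores = (K[top] @ q) / np.sqrt(d)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V[top]
```

In practice the principal components would be computed once offline (e.g. via an SVD of keys collected from a calibration dataset) and reused at inference time; with `r = d` and `k = n` the sketch reduces to exact full attention.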
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | WikiText-2 | -- | -- | 841 |
| Long-context Understanding | LongBench | Accuracy | 87.2 | 60 |
| Long-context evaluation | RULER 16k | Total Score | 54.67 | 59 |
| Long-context evaluation | RULER 32k | Overall Score | 39.92 | 41 |
| Long-context evaluation | RULER 4k | Score | 84.52 | 35 |
| Long-context evaluation | RULER 8k | Score | 65.36 | 35 |
| Mathematical Reasoning | MATH 500 | Flex Match | 90.5 | 27 |
| Long-context Language Understanding | LongBench-e (test) | LCC (Language Comprehension Score) | 61.29 | 16 |
| Long-context evaluation | RULER 32K context length (test) | Niah1 Score | 25 | 12 |
| Retrieval | RULER 128K context | -- | -- | 12 |