
Loki: Low-rank Keys for Efficient Sparse Attention

About

Inference on large language models (LLMs) can be expensive in terms of the compute and memory costs involved, especially when long sequence lengths are used. In particular, the self-attention mechanism used in LLM inference contributes significantly to these costs, which has sparked an interest in approximating the self-attention computation to reduce such costs. In this work, we propose to approximate self-attention by focusing on the dimensionality of key vectors computed in the attention block. Our analysis reveals that key vectors lie in a significantly lower-dimensional space, consistently across several datasets and models. Exploiting this observation, we propose Loki, a novel sparse attention method that ranks and selects tokens in the KV-cache based on attention scores computed in low-dimensional space. Our evaluations show that Loki is able to speed up the attention computation due to reduced data movement (load/store) and compute costs while maintaining the efficacy of the models better than other popular approximation methods.
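The idea in the abstract can be sketched in a few lines: score all cached tokens cheaply in a low-rank key subspace, keep only the top-scoring tokens, and run exact attention over that subset. The following is a minimal single-query illustration, not the authors' implementation; the function name, the orthonormal projection `proj` (e.g. obtained offline from PCA of key vectors), and the `top_k` parameter are assumptions for the sketch.

```python
import numpy as np

def loki_style_sparse_attention(q, K, V, proj, top_k):
    """Hypothetical sketch of Loki-style sparse attention for one query.

    q:     (d,) query vector
    K:     (n, d) cached key vectors (the KV-cache keys)
    V:     (n, d) cached value vectors
    proj:  (d, r) orthonormal basis of a low-rank key subspace, r << d
    top_k: number of KV-cache tokens to retain
    """
    # 1) Rank tokens with approximate scores in the r-dim subspace:
    #    cost O(n*r) per query instead of O(n*d).
    approx_scores = (K @ proj) @ (proj.T @ q)
    # 2) Select the top_k highest-scoring tokens.
    idx = np.argsort(approx_scores)[-top_k:]
    # 3) Exact scaled-dot-product attention over the selected tokens only,
    #    reducing both compute and KV-cache loads.
    scores = (K[idx] @ q) / np.sqrt(K.shape[1])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V[idx]
```

The savings come from step 1 touching only an r-dimensional projection of each key and step 3 loading only `top_k` of the `n` cached key/value vectors.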

Prajwal Singhania, Siddharth Singh, Shwai He, Soheil Feizi, Abhinav Bhatele • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | WikiText-2 | – | – | 841 |
| Long-context Understanding | LongBench | Accuracy | 87.2 | 60 |
| Long-context evaluation | RULER 16k | Total Score | 54.67 | 59 |
| Long-context evaluation | RULER 32k | Overall Score | 39.92 | 41 |
| Long-context evaluation | RULER 4k | Score | 84.52 | 35 |
| Long-context evaluation | RULER 8k | Score | 65.36 | 35 |
| Mathematical Reasoning | MATH 500 | Flex Match | 90.5 | 27 |
| Long-context Language Understanding | LongBench-e (test) | LCC (Language Comprehension Score) | 61.29 | 16 |
| Long-context evaluation | RULER 32K context length (test) | Niah1 Score | 25 | 12 |
| Retrieval | RULER 128K context | – | – | 12 |
