ProxyAttn: Guided Sparse Attention via Representative Heads

About

The quadratic complexity of attention mechanisms limits the efficiency of Large Language Models (LLMs) on long-text tasks. Recently, methods that dynamically estimate block importance have enabled efficient block-sparse attention, yielding significant acceleration in long-text prefilling of LLMs. However, their coarse-grained estimation inevitably degrades performance at high sparsity rates. In this work, we propose ProxyAttn, a training-free sparse attention algorithm that achieves more precise block estimation by compressing the dimension of attention heads. Based on our observation that multiple attention heads behave similarly, we use the scores of pooled representative heads to approximate the scores of all heads. To account for the varying sparsity among heads, we also propose a block-aware dynamic budget estimation method. By combining the scores from representative proxy heads with multi-head dynamic budgets, we achieve fine-grained block importance evaluation at low computational cost. Experiments on a variety of mainstream models and extensive benchmarks confirm the underlying similarity among attention heads. Leveraging this fine-grained estimation, ProxyAttn achieves substantial gains in both performance and efficiency over existing methods: up to 10.3x attention acceleration and 2.4x prefilling acceleration without significant performance loss. Our code is available at https://github.com/wyxstriker/ProxyAttn.
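
The abstract describes two mechanisms: approximating the block scores of all heads with a few pooled representative (proxy) heads, and giving each head its own block budget based on how concentrated its scores are. Below is a minimal PyTorch sketch of both ideas. The grouping scheme, mean pooling, function names (`estimate_block_importance`, `dynamic_block_budget`), and the coverage-based budget rule are illustrative assumptions, not ProxyAttn's actual implementation; see the repository above for the real one.

```python
import torch
import torch.nn.functional as F


def estimate_block_importance(q, k, num_groups=4, block_size=64):
    """Approximate per-block attention scores with pooled proxy heads.

    q, k: [num_heads, seq_len, head_dim]
    Returns block scores of shape [num_heads, n_blocks, n_blocks], where
    every head in a group shares the score of its pooled representative.
    """
    h, n, d = q.shape
    g = h // num_groups  # heads per group (assumes h % num_groups == 0)

    # Pool each group of similar heads into a single representative head.
    # (Assumption: similar heads are adjacent; ProxyAttn's grouping may differ.)
    q_proxy = q.reshape(num_groups, g, n, d).mean(dim=1)  # [G, n, d]
    k_proxy = k.reshape(num_groups, g, n, d).mean(dim=1)

    # Coarsen to block granularity before the score matmul, so estimation
    # costs (n / block_size)^2 per proxy head instead of n^2 per real head.
    nb = n // block_size
    qb = q_proxy[:, : nb * block_size].reshape(num_groups, nb, block_size, d).mean(dim=2)
    kb = k_proxy[:, : nb * block_size].reshape(num_groups, nb, block_size, d).mean(dim=2)
    scores = torch.einsum("gqd,gkd->gqk", qb, kb) / d**0.5  # [G, nb, nb]

    # Broadcast each group's proxy score back to all heads in the group.
    return scores.repeat_interleave(g, dim=0)  # [h, nb, nb]


def dynamic_block_budget(block_scores, coverage=0.9):
    """Per-head budget: keep the fewest blocks whose softmax mass reaches
    `coverage`, so intrinsically sparser heads receive smaller budgets."""
    h = block_scores.shape[0]
    probs = F.softmax(block_scores.flatten(1), dim=-1)       # [h, nb*nb]
    sorted_p, order = probs.sort(dim=-1, descending=True)
    budget = (sorted_p.cumsum(dim=-1) < coverage).sum(dim=-1) + 1
    keep = torch.zeros_like(probs, dtype=torch.bool)
    for head in range(h):                                    # mark top blocks
        keep[head, order[head, : budget[head]]] = True
    return keep.view_as(block_scores)                        # per-head block mask
```

In this sketch, heads within a group share one coarse score matrix, so the estimation matmul runs over G proxy heads at block granularity rather than all h heads at token granularity; the resulting per-head boolean mask then determines which blocks the exact sparse attention kernel actually computes.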

Yixuan Wang, Huang He, Siqi Bao, Hua Wu, Haifeng Wang, Qingfu Zhu, Wanxiang Che • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Long-context Understanding | LongBench v2 | Overall Score | 38.33 | 109 |
| Long-context Language Understanding | LongBench | Average Score | 50.5 | 86 |
| Information Retrieval | NIAH (test) | Average Score | 97.8 | 59 |
| Long-context Understanding | InfiniteBench | Math Score (F) | 0.4171 | 22 |
| Long-context Language Modeling | RULER (test) | Sparsity | 80 | 13 |
| Long-context Evaluation | RULER 128K sequences, Llama3.1-70B-Instruct | RULER Score | 62.23 | 4 |
