
CSAttention: Centroid-Scoring Attention for Accelerating LLM Inference

About

Long-context LLMs increasingly rely on extended, reusable prefill prompts for agents and domain Q&A, making attention computation and KV-cache transfer the dominant decode-time bottlenecks. While sparse attention reduces computation and transfer costs, it often struggles to maintain accuracy at high sparsity due to the inherent distribution shift between queries and keys. We propose Centroid-Scoring Attention (CSAttention), a training-free sparse attention method optimized for high-throughput serving of reusable contexts. CSAttention adopts a storage-for-computation strategy tailored to the offline-prefill/online-decode setting: it front-loads computation into a one-time offline prefill phase whose cost is amortized across multiple queries, while aggressively optimizing per-step decoding latency. Specifically, CSAttention constructs query-centric lookup tables during offline prefill, whose size remains fixed during decoding, so that online decoding can replace full-context scans with efficient table lookups and GPU-friendly score accumulation. Extensive experiments demonstrate that CSAttention achieves near-identical accuracy to full attention. Under high sparsity (95%) and long-context settings (32K-128K), CSAttention consistently outperforms state-of-the-art sparse attention methods in both model accuracy and inference speed, achieving up to a 4.6x inference speedup over the most accurate baseline at a context length of 128K.
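The abstract does not spell out the algorithm, but the general centroid-scoring idea it describes can be sketched. In the minimal Python sketch below, the k-means-style clustering, the top-cluster selection rule, and all function names (offline_prefill, sparse_decode_step) are illustrative assumptions rather than the paper's actual lookup-table construction: keys are grouped around centroids in a one-time offline pass, and each decode step scores the query against the centroids and attends only to the keys in the best-scoring clusters.

```python
# A minimal sketch of the centroid-scoring idea described in the abstract.
# The clustering method, cluster-selection rule, and all names here are
# illustrative assumptions, not the authors' implementation.
import numpy as np

def offline_prefill(keys, num_centroids=64, iters=10):
    """One-time, amortizable phase: cluster the prefill keys and record
    which key positions belong to each centroid (the 'lookup table')."""
    rng = np.random.default_rng(0)
    centroids = keys[rng.choice(len(keys), num_centroids, replace=False)]
    for _ in range(iters):  # k-means-style updates under inner-product similarity
        assign = np.argmax(keys @ centroids.T, axis=1)
        for c in range(num_centroids):
            members = keys[assign == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    table = [np.flatnonzero(assign == c) for c in range(num_centroids)]
    return centroids, table

def sparse_decode_step(query, keys, values, centroids, table, top_c=4):
    """Per-step decode: score the query against centroids (cheap), look up
    the member positions of the best clusters, and attend only to those."""
    top = np.argsort(query @ centroids.T)[-top_c:]
    idx = np.concatenate([table[c] for c in top])
    scores = (query @ keys[idx].T) / np.sqrt(keys.shape[1])
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ values[idx]
```

Under these assumptions, the per-step cost of sparse_decode_step scales with the number of centroids plus the members of the selected clusters rather than with the full context length, which is the effect the abstract's "table lookups and GPU-friendly score accumulation" points to.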

Chuxu Song, Zhencan Peng, Jiuqi Wei, Chuanhui Yang • 2026

Related benchmarks

Task                             Dataset               Result                Rank
Long-context Understanding       LongBench             MQA-E Score: 56.02    18
Long-context language modeling   LongBench v2 (test)   Acc (Short): 57       7
Long-context evaluation          LongBench v2          Overall Score: 31.2   6
