
IndexCache: Accelerating Sparse Attention via Cross-Layer Index Reuse

About

Long-context agentic workflows have emerged as a defining use case for large language models, making attention efficiency critical for both inference speed and serving cost. Sparse attention addresses this challenge effectively, and DeepSeek Sparse Attention (DSA) is a representative production-grade solution: a lightweight lightning indexer selects the top-k most relevant tokens per query, reducing core attention from $O(L^2)$ to $O(Lk)$. However, the indexer itself retains $O(L^2)$ complexity and must run independently at every layer, even though the resulting top-k selections are highly similar across consecutive layers. We present IndexCache, which exploits this cross-layer redundancy by partitioning layers into a small set of Full layers that run their own indexers and a majority of Shared layers that simply reuse the nearest Full layer's top-k indices. We propose two complementary approaches to determine and optimize this configuration. Training-free IndexCache applies a greedy search that selects which layers retain their indexers by directly minimizing language-modeling loss on a calibration set, requiring no weight updates. Training-aware IndexCache introduces a multi-layer distillation loss that trains each retained indexer against the averaged attention distributions of all layers it serves, enabling even simple interleaved patterns to match full-indexer accuracy. Experimental results on a 30B DSA model show that IndexCache can remove 75% of indexer computations with negligible quality degradation, achieving up to 1.82$\times$ prefill speedup and 1.48$\times$ decode speedup compared to standard DSA. These positive results are further confirmed by our preliminary experiments on the production-scale GLM-5 model (Figure 1).
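To make the Full/Shared mechanism concrete, here is a minimal sketch of cross-layer index reuse. Everything specific in it is a placeholder, not the paper's actual configuration: the layer pattern `FULL_LAYERS = {0, 4}`, the tensor sizes, and the dot-product `indexer_topk` stand-in for DSA's lightning indexer are all illustrative assumptions, and Shared layers are taken to reuse indices from the most recent preceding Full layer (one natural reading of "nearest" that respects forward-pass order).

```python
import random

random.seed(0)

NUM_LAYERS, SEQ_LEN, DIM, TOPK = 8, 64, 16, 8
FULL_LAYERS = {0, 4}   # hypothetical interleaved pattern: 2 of 8 layers keep indexers

def indexer_topk(query, key_cache, k):
    """Stand-in for the lightning indexer: score every cached token
    against the query and keep the k highest-scoring positions."""
    scores = [sum(q * kk for q, kk in zip(query, key)) for key in key_cache]
    return sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]

# Illustrative per-layer queries and indexer key caches (random vectors).
queries = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(NUM_LAYERS)]
key_caches = [[[random.gauss(0, 1) for _ in range(DIM)] for _ in range(SEQ_LEN)]
              for _ in range(NUM_LAYERS)]

indexer_runs = 0
cached = None
selected = []
for layer in range(NUM_LAYERS):
    if layer in FULL_LAYERS:          # Full layer: run its own indexer
        cached = indexer_topk(queries[layer], key_caches[layer], TOPK)
        indexer_runs += 1
    # Shared layers simply reuse the most recent Full layer's top-k indices.
    selected.append(cached)

print(f"indexer runs: {indexer_runs}/{NUM_LAYERS}")
```

With this pattern only 2 of 8 layers pay the $O(L^2)$ indexer cost (here 75% of indexer invocations skipped, matching the reduction the paper reports); each layer's core attention then attends only to its `TOPK` selected positions.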

Yushi Bai, Qian Dong, Ting Jiang, Xin Lv, Zhengxiao Du, Aohan Zeng, Jie Tang, Juanzi Li • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Long-context language modeling | Long-Context Evaluation Suite (MRCR v2, GraphWalks, LongBench v2, RULER, AA-LCR) | Average Score: 78.7 | 5 |
| LLM Inference Performance | Context Length 10K | Prefill Time (s): 0.45 | 3 |
| LLM Inference Performance | Context Length 60K | Prefill Time (s): 2.59 | 3 |
| LLM Inference Performance | Context Length 120K | Prefill Time (s): 5.66 | 3 |
| LLM Inference Performance | Context Length 200K | Prefill Time (s): 10.7 | 3 |

Other info

GitHub
