
AsyncTLS: Efficient Generative LLM Inference with Asynchronous Two-level Sparse Attention

About

Long-context inference in LLMs faces the dual challenges of quadratic attention complexity and prohibitive KV cache memory. Token-level sparse attention offers superior accuracy but incurs costly indexing overhead, while block-level methods improve efficiency at the expense of precision. We propose AsyncTLS, a hierarchical sparse attention system that combines coarse-grained block filtering with fine-grained token selection to balance accuracy and efficiency, coupled with an asynchronous offloading engine that overlaps KV cache transfers with computation by exploiting temporal locality. Evaluated on Qwen3 and GLM-4.7-Flash across GQA and MLA architectures, AsyncTLS achieves accuracy comparable to full attention while delivering 1.2x-10.0x operator speedups and 1.3x-4.7x end-to-end throughput improvements on 48k-96k contexts.
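The two-level selection described above can be sketched in a few lines of NumPy. This is a simplified single-query illustration only: the mean-pooled block scoring, the `top_blocks`/`top_tokens` parameters, and the function name are assumptions for exposition, not the paper's actual kernel, which runs asynchronously on GPU with KV offloading.

```python
import numpy as np

def two_level_sparse_attention(q, K, V, block_size=4, top_blocks=2, top_tokens=4):
    """Hedged sketch: coarse block filtering, then fine token selection."""
    d = q.shape[0]
    num_blocks = K.shape[0] // block_size

    # Coarse stage: score each KV block by its mean-pooled key,
    # keep only the top-scoring blocks (cheap index, low precision).
    block_keys = K[: num_blocks * block_size].reshape(num_blocks, block_size, d).mean(axis=1)
    keep_blocks = np.argsort(block_keys @ q)[-top_blocks:]

    # Fine stage: exact token-level scores, but only inside surviving
    # blocks (restores precision at a fraction of the full cost).
    idx = np.concatenate(
        [np.arange(b * block_size, (b + 1) * block_size) for b in keep_blocks]
    )
    scores = K[idx] @ q / np.sqrt(d)
    sel = idx[np.argsort(scores)[-top_tokens:]]

    # Softmax attention over the selected tokens only.
    w = np.exp(K[sel] @ q / np.sqrt(d))
    w /= w.sum()
    return w @ V[sel], np.sort(sel)
```

Full attention would score all 16 tokens here; the sketch touches only `top_blocks * block_size = 8` of them at token granularity, which is the accuracy/efficiency trade the abstract describes.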

Yuxuan Hu, Jianchao Tan, Jiaqi Zhang, Wen Zan, Pingwei Sun, Yifan Lu, Yerui Sun, Yuchen Xie, Xunliang Cai, Jing Zhang • 2026

Related benchmarks

Task                         Dataset        Result           Rank
Long-context Understanding   LongBench      NQA Score 27.24  12
In-context retrieval         RULER (test)   S1 Score 100     8
In-context retrieval         RULER          MQ Score 95.1    4
