
A Unified Sparse Attention via Multi-Granularity Compression

About

Efficient long-context understanding and reasoning are increasingly vital for large language model (LLM) applications such as multi-turn dialogue and program analysis. However, the core self-attention mechanism scales quadratically with sequence length, creating a fundamental computational bottleneck. Existing sparse attention methods alleviate this issue but face trade-offs: training-based methods are costly and cannot be directly applied as acceleration plugins for other models, while inference-time methods often compromise efficiency or cross-modal generality. To address these limitations, we present UniSparse, a unified mechanism built on the notion of composite tokens: compact representations that aggregate multi-granularity contextual information. Building on this abstraction, UniSparse dynamically constructs sparse attention through multi-granularity compression and block-level selection, enabling efficient and hardware-friendly execution on GPUs. Across multiple modalities and tasks ranging from synthetic benchmarks to real-world applications, UniSparse consistently surpasses state-of-the-art sparse attention methods (e.g., MInference, XAttention, FlexPrefill) in both accuracy and efficiency, achieving $\ge$ 99% of full-attention accuracy and up to 2.61$\times$ faster attention computation than FlashAttention.
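The compression-then-selection idea described above can be sketched in a few lines. The snippet below is a hypothetical simplification, not the UniSparse implementation: each key block is compressed into a single composite token by mean pooling, queries are scored against these composite tokens to pick the top-k most relevant blocks, and full softmax attention then runs only over the keys and values of the selected blocks. The function name, pooling choice, and per-query loop are illustrative assumptions.

```python
import numpy as np

def block_sparse_attention(q, k, v, block_size=4, top_k=2):
    """Toy compression-then-selection sparse attention.

    Hypothetical sketch of the idea: compress each key block into one
    "composite token" (mean pooling here), score each query against the
    compressed blocks, keep the top_k highest-scoring blocks, and run
    ordinary softmax attention over the selected keys/values only.
    """
    n, d = k.shape
    n_blocks = n // block_size
    kb = k[: n_blocks * block_size].reshape(n_blocks, block_size, d)
    vb = v[: n_blocks * block_size].reshape(n_blocks, block_size, d)
    comp = kb.mean(axis=1)                       # (n_blocks, d) composite tokens

    out = np.zeros((q.shape[0], d))
    for i, qi in enumerate(q):
        block_scores = comp @ qi                 # coarse relevance per block
        sel = np.argsort(block_scores)[-top_k:]  # indices of selected blocks
        ks = kb[sel].reshape(-1, d)              # gather fine-grained keys
        vs = vb[sel].reshape(-1, d)
        logits = ks @ qi / np.sqrt(d)
        w = np.exp(logits - logits.max())        # numerically stable softmax
        w /= w.sum()
        out[i] = w @ vs                          # attend only within selected blocks
    return out
```

With `top_k` equal to the number of blocks, this reduces exactly to full attention (softmax is invariant to key ordering), which makes the sparsity/accuracy trade-off easy to probe: lowering `top_k` shrinks the attended set while the composite-token scores decide which blocks survive.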

Siran Liu, Zane Cao, Yongchao He • 2025

Related benchmarks

Task                         Dataset                       Metric                     Result   Rank
Video Understanding          Video-MME without subtitles   Overall Score              65       67
Long-context Understanding   RULER                         Performance @ 4K Context   97.38    65
Long-context Understanding   HELMET 2025                   Accuracy (8K Context)      61.37    16
Video Understanding          Video-MME With Subtitles      Performance (Short)        75.3     14
