
Spava: Accelerating Long-Video Understanding via Sequence-Parallelism-aware Approximate Attention

About

The efficiency of long-video inference remains a critical bottleneck, mainly due to the dense computation in the prefill stage of Large Multimodal Models (LMMs). Existing methods either compress visual embeddings or apply sparse attention on a single GPU, yielding limited acceleration or degraded performance and restricting LMMs from handling longer, more complex videos. To overcome these issues, we propose Spava, a sequence-parallel framework with optimized attention that accelerates long-video inference across multiple GPUs. By distributing approximate attention across devices, Spava reduces computation and increases parallelism, enabling efficient processing of more visual embeddings without compression and thereby improving task performance. System-level optimizations, such as load balancing and fused forward passes, further unlock Spava's potential, delivering speedups of 12.72x, 1.70x, and 1.18x over FlashAttn, ZigZagRing, and APB, respectively, without notable performance loss. Code is available at https://github.com/thunlp/APB
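The abstract's core idea, sharding a long token sequence across workers and running approximate rather than dense attention on each shard, can be illustrated with a toy single-process sketch. This is not the authors' implementation: the function names are hypothetical, plain Python loops stand in for distributed GPU kernels, and a simple top-k key selection stands in for whatever approximation Spava actually uses.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(q, keys, vals):
    # scaled dot-product attention for a single query vector
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
    w = softmax(scores)
    return [sum(wi * v[j] for wi, v in zip(w, vals)) for j in range(len(vals[0]))]

def approx_attention_sharded(queries, keys, vals, num_shards, top_k):
    # Split the query sequence into contiguous shards (one per "GPU") and,
    # for each query, attend only to the top_k highest-scoring keys --
    # a stand-in for sequence-parallel approximate attention.
    n = len(queries)
    out = [None] * n
    shard = (n + num_shards - 1) // num_shards
    for s in range(num_shards):  # in a real system each shard runs on its own device
        for i in range(s * shard, min((s + 1) * shard, n)):
            q = queries[i]
            d = len(q)
            scores = [(sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d), j)
                      for j, k in enumerate(keys)]
            keep = [j for _, j in sorted(scores, reverse=True)[:top_k]]
            out[i] = attention(q, [keys[j] for j in keep], [vals[j] for j in keep])
    return out
```

With `top_k` equal to the full key count the sharded result matches dense attention exactly; shrinking `top_k` trades a small approximation error for less computation per shard, which is the efficiency lever the abstract describes.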

Yuxiang Huang, Mingye Li, Xu Han, Chaojun Xiao, Weilin Zhao, Ao Sun, Ziqi Yuan, Hao Zhou, Fandong Meng, Zhiyuan Liu • 2026

Related benchmarks

Task                      Dataset                 Metric                Result   Rank
Video Understanding       LongVideoBench (test)   Accuracy (8-15s)      77.78    21
Long Video Understanding  VNBench                 Retrieval E Accuracy  90.67    21
