
SLA2: Sparse-Linear Attention with Learnable Routing and QAT

About

Sparse-Linear Attention (SLA) combines sparse and linear attention to accelerate diffusion models and has shown strong performance in video generation. However, SLA has two weaknesses: (i) it relies on a heuristic split that assigns computations to the sparse or linear branch based on attention-weight magnitude, which can be suboptimal; and (ii) a formal analysis of SLA's attention error reveals a mismatch between SLA's formulation and a direct decomposition into sparse and linear attention. We propose SLA2, which introduces (I) a learnable router that dynamically selects whether each attention computation should use sparse or linear attention, (II) a more faithful and direct sparse-linear attention formulation that uses a learnable ratio to combine the sparse and linear attention branches, and (III) a sparse + low-bit attention design, where low-bit attention is introduced via quantization-aware fine-tuning to reduce quantization error. Experiments show that on video diffusion models, SLA2 can achieve 97% attention sparsity and deliver an 18.6x attention speedup while preserving generation quality.
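The core idea of (I) and (II) can be sketched in a few lines: a router partitions key blocks between an exact sparse branch and a cheap linear-attention branch, and a learnable scalar combines the two outputs. The sketch below is a minimal NumPy illustration under stated assumptions; the names (`alpha`, `route_logits`, the zero router threshold, and the elu+1 feature map) are illustrative choices, not the paper's exact design.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def sla2_attention(Q, K, V, route_logits, alpha, block=4):
    """Illustrative sparse + linear attention with per-block routing.

    Q, K, V: (n, d) arrays; route_logits: one learnable score per key block;
    alpha: learnable scalar mixing ratio, so out = sparse + alpha * linear.
    """
    n, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)
    # Router (assumed design): key blocks with positive logits go to the
    # sparse branch, the rest to the linear branch.
    keep = route_logits > 0.0                 # (n_blocks,) boolean
    mask = np.repeat(keep, block)[:n]         # expand to per-key mask
    # Sparse branch: exact softmax attention restricted to routed keys.
    sparse_scores = np.where(mask[None, :], scores, -np.inf)
    sparse_out = softmax(sparse_scores) @ V
    # Linear branch on the remaining keys: phi(Q) (phi(K)^T V) with
    # phi = elu + 1, computed in O(n d^2) instead of O(n^2 d).
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))
    Kl, Vl = K[~mask], V[~mask]
    denom = phi(Q) @ phi(Kl).sum(0, keepdims=True).T + 1e-6
    linear_out = (phi(Q) @ (phi(Kl).T @ Vl)) / denom
    return sparse_out + alpha * linear_out
```

In training, `route_logits` and `alpha` would be optimized jointly with the model; here they are plain inputs so the mechanics stay visible.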

Jintao Zhang, Haoxu Wang, Kai Jiang, Kaiwen Zheng, Youhe Jiang, Ion Stoica, Jianfei Chen, Jun Zhu, Joseph E. Gonzalez • 2026

Related benchmarks

Task | Dataset | Result (IQ) | Rank
Text-to-Video Generation | Private Video Dataset, Wan2.1-T2V-1.3B-480P (test) | 67.7 | 10
Text-to-Video Generation | Private Video Dataset, Wan2.1-T2V-14B-720P (test) | 69.63 | 10
