
Radial Attention: $O(n\log n)$ Sparse Attention with Energy Decay for Long Video Generation

About

Recent advances in diffusion models have enabled high-quality video generation, but the additional temporal dimension significantly increases computational costs, making training and inference on long videos prohibitively expensive. In this paper, we identify a phenomenon we term Spatiotemporal Energy Decay in video diffusion models: post-softmax attention scores diminish as the spatial and temporal distance between tokens increases, akin to the physical decay of signals or waves over space and time in nature. Motivated by this, we propose Radial Attention, a scalable sparse attention mechanism with $\mathcal{O}(n \log n)$ complexity that translates energy decay into exponentially decaying compute density, which is significantly more efficient than standard $\mathcal{O}(n^2)$ dense attention and more expressive than linear attention. Specifically, Radial Attention employs a simple, static attention mask in which each token attends to spatially nearby tokens, with the attention window size shrinking with temporal distance. Moreover, it allows pre-trained video diffusion models to extend their generation length with efficient LoRA-based fine-tuning. Extensive experiments show that Radial Attention maintains video quality across Wan2.1-14B, HunyuanVideo, and Mochi 1, achieving up to a 1.9$\times$ speedup over the original dense attention. With minimal tuning, it enables video generation up to 4$\times$ longer while reducing training costs by up to 4.4$\times$ compared to direct fine-tuning and accelerating inference by up to 3.7$\times$ compared to dense attention inference. Code is released at https://github.com/mit-han-lab/radial-attention.
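The static mask described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's released implementation: the exact decay schedule is an assumption here (the spatial window halves each time the temporal distance doubles), but it shows how the per-token attention budget shrinks with temporal distance, giving roughly $\mathcal{O}(\log n)$ attended tokens per query and $\mathcal{O}(n \log n)$ overall.

```python
import numpy as np

def radial_mask(num_frames: int, tokens_per_frame: int, base_window: int) -> np.ndarray:
    """Sketch of a radial attention mask (illustrative only; the actual
    mask in the radial-attention repository may use a different schedule).

    Tokens are laid out frame-major. A query in frame fi attends to a key
    in frame fj only if their spatial offset falls inside a window that
    halves each time the temporal distance |fi - fj| doubles:
        window(d) = base_window >> floor(log2(d + 1))
    """
    n = num_frames * tokens_per_frame
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        fi, si = divmod(i, tokens_per_frame)  # (frame index, spatial index)
        for j in range(n):
            fj, sj = divmod(j, tokens_per_frame)
            d = abs(fi - fj)                       # temporal distance
            window = max(1, base_window >> int(np.log2(d + 1)))
            if abs(si - sj) < window:              # spatially nearby only
                mask[i, j] = True
    return mask
```

With `radial_mask(4, 8, 8)`, tokens in the same frame see the full window of 8 spatial neighbors, while tokens three frames apart see a window of only 2, matching the exponentially decaying compute density the abstract describes. In practice such a boolean mask would be consumed by a block-sparse attention kernel rather than materialized densely.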

Xingyang Li, Muyang Li, Tianle Cai, Haocheng Xi, Shuo Yang, Yujun Lin, Lvmin Zhang, Songlin Yang, Jinbo Hu, Kelly Peng, Maneesh Agrawala, Ion Stoica, Kurt Keutzer, Song Han • 2025

Related benchmarks

Task             | Dataset                        | Result                      | Rank
-----------------|--------------------------------|-----------------------------|-----
Rolling-Forcing  | LongVBench                     | VBench Score: 61.51         | 15
Video Generation | LongVGenBench LongVie2 (test)  | LongVGenBench Score: 41.11  | 15
Video Generation | VBench v1 (test)               | Latency (s): 7.39           | 13
Video Generation | VBench Wan2.1                  | Sparsity: 73.6              | 7
Video Generation | Wan2.1-14B 69 frames (test)    | Vision Reward: 0.128        | 7
Video Generation | HunyuanVideo 117 frames (test) | Vision Reward: 0.139        | 7
Video Generation | VBench CogVideoX v1.5          | Sparsity: 70.7              | 6
Video Generation | VBench Hunyuan Video           | Sparsity: 76.3              | 6
Video Generation | Wan2.1-1.3B 4-step distilled   | VBench: 0.727               | 6
Video Generation | VBench training-free evaluation| Quality Score: 0.841        | 5

(Showing 10 of 11 rows.)
