
Causality in Video Diffusers is Separable from Denoising

About

Causality -- referring to temporal, uni-directional cause-effect relationships between components -- underlies many complex generative processes, including videos, language, and robot trajectories. Current causal diffusion models entangle temporal reasoning with iterative denoising, applying causal attention across all layers, at every denoising step, and over the entire context. In this paper, we show that the causal reasoning in these models is separable from the multi-step denoising process. Through systematic probing of autoregressive video diffusers, we uncover two key regularities: (1) early layers produce highly similar features across denoising steps, indicating redundant computation along the diffusion trajectory; and (2) deeper layers exhibit sparse cross-frame attention and primarily perform intra-frame rendering. Motivated by these findings, we introduce Separable Causal Diffusion (SCD), a new architecture that explicitly decouples once-per-frame temporal reasoning, via a causal transformer encoder, from multi-step frame-wise rendering, via a lightweight diffusion decoder. Extensive experiments on both pretraining and post-training tasks across synthetic and real benchmarks show that SCD significantly improves throughput and per-frame latency while matching or surpassing the generation quality of strong causal diffusion baselines.
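The decoupling described above can be sketched in code. Below is a minimal, hypothetical PyTorch illustration of the SCD idea, not the authors' implementation: a causal transformer encoder runs once per frame to produce temporal conditioning, and a lightweight per-frame decoder runs at every denoising step without any cross-frame attention. All module names, dimensions, and the toy Euler-style update rule are assumptions for illustration.

```python
# Hypothetical sketch of Separable Causal Diffusion (SCD).
# Names, sizes, and the update rule are illustrative assumptions.
import torch
import torch.nn as nn

class CausalEncoder(nn.Module):
    """Temporal reasoning: runs ONCE per frame with causal self-attention."""
    def __init__(self, dim=64, heads=4, layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.enc = nn.TransformerEncoder(layer, layers)

    def forward(self, frames):  # frames: (B, T, dim)
        T = frames.size(1)
        # Boolean causal mask: frame t may attend only to frames <= t.
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        return self.enc(frames, mask=mask)

class DiffusionDecoder(nn.Module):
    """Frame-wise rendering: runs at EVERY denoising step, per frame only."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim * 2 + 1, 128), nn.SiLU(), nn.Linear(128, dim))

    def forward(self, noisy, cond, t):  # noisy, cond: (B, dim); t: (B, 1)
        return self.net(torch.cat([noisy, cond, t], dim=-1))

def generate(encoder, decoder, num_frames=3, steps=4, dim=64):
    """Temporal reasoning once per frame; cheap denoising many steps per frame."""
    frames = []
    for _ in range(num_frames):
        # Placeholder token for the frame being generated.
        past = torch.stack(frames + [torch.zeros(1, dim)], dim=1)  # (1, t+1, dim)
        cond = encoder(past)[:, -1]          # one encoder pass for this frame
        x = torch.randn(1, dim)
        for s in range(steps):               # multi-step per-frame denoising
            t = torch.full((1, 1), 1.0 - s / steps)
            x = x - decoder(x, cond, t) / steps  # toy Euler-style update
        frames.append(x)
    return torch.cat(frames, dim=0)          # (num_frames, dim)
```

Because the encoder is outside the denoising loop, its cost is amortized to one pass per frame regardless of the number of diffusion steps, which is the source of the throughput and latency gains the abstract reports.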

Xingjian Bai, Guande He, Zhengqi Li, Eli Shechtman, Xun Huang, Zongze Wu • 2026

Related benchmarks

Task | Dataset | Result | Rank
Text-to-Video Generation | VBench | Quality Score: 85.14 | 111
Video Generation | UCF-101 64x64 (test) | FVD: 158.7 | 12
Video Generation | TECO-Minecraft 128x128 | LPIPS: 0.168 | 6
Unconditional Generation | RealEstate10K unconditional 256x256 | LPIPS: 0.135 | 4
