
Scaling Diffusion Mamba with Bidirectional SSMs for Efficient Image and Video Generation

About

The Mamba architecture, built on a selective state-space approach, has recently shown promise for efficient modeling of long sequences, yet its application to image generation remains underexplored. Traditional diffusion transformers (DiT) rely on self-attention blocks that are effective but whose computational cost scales quadratically with input length, limiting their use for high-resolution images. To address this challenge, we introduce a novel diffusion architecture, Diffusion Mamba (DiM), which forgoes attention mechanisms in favor of a scalable alternative. By harnessing the inherent efficiency of the Mamba architecture, DiM achieves fast inference and a reduced computational load while maintaining linear complexity with respect to sequence length. Our architecture not only scales effectively but also outperforms existing diffusion transformers on both image and video generation tasks. The results affirm the scalability and efficiency of DiM, establishing a new benchmark for image and video generation. This work advances the field of generative models and paves the way for further applications of scalable architectures.
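To make the linear-complexity claim concrete, the sketch below shows a minimal bidirectional state-space scan in NumPy. It is an illustration only, not the paper's implementation: the function names are hypothetical, the parameters (A, B, C) are fixed rather than input-dependent as in a true selective (Mamba-style) SSM, and the bidirectional combination is a simple sum of a forward and a backward pass over the token sequence.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """One-directional linear state-space recurrence over a sequence.

    x: (L, d_in) token sequence; A: (d_state,) per-state decay;
    B: (d_in, d_state) input projection; C: (d_state, d_in) readout.
    The loop visits each token once, so the cost is O(L) in sequence
    length, versus the O(L^2) cost of self-attention.
    """
    L, _ = x.shape
    h = np.zeros(A.shape[0])          # hidden state h_0 = 0
    ys = np.empty_like(x)
    for t in range(L):
        h = A * h + x[t] @ B          # state update: h_t = A * h_{t-1} + B x_t
        ys[t] = h @ C                 # readout:      y_t = C h_t
    return ys

def bidirectional_ssm(x, A, B, C):
    """Sum a forward scan and a reversed scan so that every token can
    attend to context on both sides, as image/video patch sequences
    require (a causal scan alone would only see earlier patches)."""
    fwd = ssm_scan(x, A, B, C)
    bwd = ssm_scan(x[::-1], A, B, C)[::-1]
    return fwd + bwd

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    L, d_in, d_state = 8, 4, 16       # toy sizes for illustration
    x = rng.standard_normal((L, d_in))
    A = np.full(d_state, 0.9)         # stable decay (|A| < 1)
    B = 0.1 * rng.standard_normal((d_in, d_state))
    C = 0.1 * rng.standard_normal((d_state, d_in))
    y = bidirectional_ssm(x, A, B, C)
    print(y.shape)                    # sequence length is preserved
```

In a full DiM-style block, such a scan would replace the self-attention sublayer of a DiT block, with the remaining structure (normalization, MLP, diffusion-timestep conditioning) left in place.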

Shentong Mo, Yapeng Tian• 2024

Related benchmarks

Task: Unconditional video generation
Dataset: UCF-101 256x256
Metric: FVD (256x256, 2048)
Result: 358.8
Rank: 12
