MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation

About

We propose the first joint audio-video generation framework that delivers engaging watching and listening experiences simultaneously, toward high-quality realistic videos. To generate joint audio-video pairs, we propose a novel Multi-Modal Diffusion model (i.e., MM-Diffusion) with two coupled denoising autoencoders. In contrast to existing single-modal diffusion models, MM-Diffusion is built around a sequential multi-modal U-Net designed for a joint denoising process. Two subnets for audio and video learn to gradually generate aligned audio-video pairs from Gaussian noise. To ensure semantic consistency across modalities, we propose a novel random-shift based attention block bridging the two subnets, which enables efficient cross-modal alignment and thus mutually reinforces audio-video fidelity. Extensive experiments show superior results in unconditional audio-video generation and zero-shot conditional tasks (e.g., video-to-audio). In particular, we achieve the best FVD and FAD on the Landscape and AIST++ dancing datasets. Turing tests with 10k votes further demonstrate dominant preferences for our model. The code and pre-trained models can be downloaded at https://github.com/researchmm/MM-Diffusion.
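The random-shift attention idea can be illustrated with a short sketch: rather than computing full cross-attention between every video frame and the entire audio sequence, each forward pass attends only to a small, randomly shifted audio window, which reduces cost while still exposing the subnets to varied alignments during training. Below is a minimal PyTorch sketch; the function name, tensor shapes, and window size are illustrative assumptions, not the paper's exact implementation (see the GitHub repository for that).

```python
import torch

def random_shift_cross_attention(video_q, audio_kv, window=4):
    """Toy sketch of random-shift cross-modal attention (hypothetical API).

    video_q:  (B, Tv, D) video-frame queries
    audio_kv: (B, Ta, D) audio tokens used as keys/values
    Each call attends to a `window`-sized audio slice at a random offset
    instead of the full audio sequence.
    """
    B, Tv, D = video_q.shape
    Ta = audio_kv.size(1)
    # Sample one random shift per forward pass (shared across the batch).
    shift = torch.randint(0, max(Ta - window, 1), (1,)).item()
    window_kv = audio_kv[:, shift:shift + window]                 # (B, W, D)
    # Scaled dot-product attention restricted to the shifted window.
    attn = torch.softmax(
        video_q @ window_kv.transpose(1, 2) / D ** 0.5, dim=-1)  # (B, Tv, W)
    return attn @ window_kv                                       # (B, Tv, D)

# Usage: 16 video-frame tokens attending to a 128-step audio sequence.
v = torch.randn(2, 16, 64)
a = torch.randn(2, 128, 64)
out = random_shift_cross_attention(v, a)   # -> shape (2, 16, 64)
```

Because the shift is re-sampled on every call, successive denoising steps see different audio windows, which is one way to approximate full cross-modal attention at a fraction of its cost.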

Ludan Ruan, Yiyang Ma, Huan Yang, Huiguo He, Bei Liu, Jianlong Fu, Nicholas Jing Yuan, Qin Jin, Baining Guo • 2022

Related benchmarks

Task                                | Dataset                | Result          | Rank
Co-Speech Gesture Video Generation  | PATS (test)            | Diversity 5.189 | 22
Joint audio-video generation        | JavisBench 1.0 (test)  | AV-IB 0.119     | 18
Joint Video-Audio Generation        | Landscape (test)       | FVD 238.3       | 9
Audio-to-video generation (A2V)     | AIST++ (test)          | FVD 184.4       | 6
Text-to-Audio-Video Generation      | JavisBench mini (test) | FVD 2.31e+3     | 5
Video-to-audio generation (V2A)     | AIST++ (test)          | FAD 13.3        | 2
Video-to-audio generation (V2A)     | Landscape (test)       | FAD 13.6        | 2
