
Diffusion Models for Joint Audio-Video Generation

About

Multimodal generative models have shown remarkable progress in single-modality video and audio synthesis, yet truly joint audio-video generation remains an open challenge. In this paper, I make four key contributions to advance this field. First, I release two high-quality, paired audio-video datasets: 13 hours of video-game clips and 64 hours of concert performances, each segmented into consistent 34-second samples to facilitate reproducible research. Second, I train the MM-Diffusion architecture from scratch on these datasets, demonstrating its ability to produce semantically coherent audio-video pairs and quantitatively evaluating alignment on rapid actions and musical cues. Third, I investigate joint latent diffusion by leveraging pretrained video and audio encoder-decoders, uncovering challenges and inconsistencies in the multimodal decoding stage. Finally, I propose a sequential two-step text-to-audio-video generation pipeline: first generating video, then conditioning on both the video output and the original text prompt to synthesize temporally synchronized audio (see the sketch below). My experiments show that this modular approach yields high-fidelity, well-aligned audio-video generations.
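
As a rough illustration of the sequential pipeline, the sketch below chains a text-to-video stage with a video- and text-conditioned audio stage. The function names, tensor shapes, and sampling parameters here are illustrative assumptions standing in for the paper's actual models, not its real interfaces.

```python
# Minimal sketch of the two-step text-to-audio-video pipeline.
# generate_video and generate_audio are hypothetical placeholders for the
# underlying diffusion models; they return zero-filled stand-in outputs.

import numpy as np


def generate_video(prompt: str, seconds: float = 34.0, fps: int = 10) -> np.ndarray:
    """Hypothetical text-to-video stage; returns frames as (T, H, W, C) uint8."""
    num_frames = int(seconds * fps)
    return np.zeros((num_frames, 64, 64, 3), dtype=np.uint8)  # stand-in frames


def generate_audio(prompt: str, video: np.ndarray,
                   sample_rate: int = 16000, fps: int = 10) -> np.ndarray:
    """Hypothetical audio stage conditioned on both the generated video and the
    original prompt; returns a mono waveform matched to the video's duration."""
    seconds = video.shape[0] / fps
    return np.zeros(int(seconds * sample_rate), dtype=np.float32)  # stand-in waveform


def text_to_audio_video(prompt: str) -> tuple[np.ndarray, np.ndarray]:
    # Step 1: generate video from the text prompt alone.
    video = generate_video(prompt)
    # Step 2: synthesize audio conditioned on the video output and the prompt,
    # keeping the waveform temporally aligned with the frames.
    audio = generate_audio(prompt, video)
    return video, audio


video, audio = text_to_audio_video("a pianist performing on a concert stage")
```

Keeping the two stages separate is what makes the approach modular: either generator can be swapped or retrained independently, and the audio stage only needs the rendered frames and the prompt as conditioning signals.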

Alejandro Paredes La Torre • 2026

Related benchmarks

Task                                       | Dataset          | Result      | Rank
Text-to-Video Generation                   | Concerts dataset | FAD 5.02e+3 | 1
Joint Audio-Video Generation               | AIST++           | --          | 1
Unconditional Joint Audio-Video Generation | Concerts dataset | --          | 1
