
Quantized GAN for Complex Music Generation from Dance Videos

About

We present Dance2Music-GAN (D2M-GAN), a novel adversarial multi-modal framework that generates complex musical samples conditioned on dance videos. Our proposed framework takes dance video frames and human body motions as input, and learns to generate music samples that plausibly accompany the corresponding input. Most existing conditional music generation works produce specific types of mono-instrumental sounds using symbolic audio representations (e.g., MIDI) and usually rely on pre-defined musical synthesizers. In contrast, we generate dance music in complex styles (e.g., pop, breaking, etc.) by employing a Vector Quantized (VQ) audio representation, thereby combining the generality of symbolic representations with the high abstraction capacity of continuous ones. By performing an extensive set of experiments on multiple datasets, and following a comprehensive evaluation protocol, we assess the generative qualities of our proposal against alternatives. The attained quantitative results, which measure music consistency, beat correspondence, and music diversity, demonstrate the effectiveness of our proposed method. Last but not least, we curate a challenging dance-music dataset of in-the-wild TikTok videos, which we use to further demonstrate the efficacy of our approach in real-world applications, and which we hope will serve as a starting point for relevant future research.
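The core of the VQ audio representation mentioned above is vector quantization: each continuous latent vector from an audio encoder is snapped to its nearest entry in a learned codebook, yielding a discrete token sequence. As a minimal sketch (not the paper's implementation; array shapes and names are illustrative assumptions), the lookup step can be written as:

```python
import numpy as np

def vector_quantize(latents, codebook):
    """Map each continuous latent vector to its nearest codebook entry
    (squared L2 distance), the basic step of a VQ audio representation.

    latents:  (N, D) array of continuous encoder outputs
    codebook: (K, D) array of learned code vectors
    returns:  (N,) integer code indices and the (N, D) quantized vectors
    """
    # Squared L2 distance between every latent and every code vector
    dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    codes = dists.argmin(axis=1)   # one discrete token per latent
    quantized = codebook[codes]    # snap latents onto the codebook
    return codes, quantized

# Toy usage: a 4-entry codebook in 2-D, two latent vectors
codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
latents = np.array([[0.9, 0.1], [0.2, 0.8]])
codes, quantized = vector_quantize(latents, codebook)
# codes → [1, 2]: each latent maps to its closest code vector
```

The resulting discrete codes are what a generator can be trained to predict from the dance video and motion inputs, while a separate decoder maps codes back to a waveform.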

Ye Zhu, Kyle Olszewski, Yu Wu, Panos Achlioptas, Menglei Chai, Yan Yan, Sergey Tulyakov · 2022

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Dance-to-Music | AIST++ | BCS 95.6 | 17 |
| Dance-to-Music | AIST++ (test) | BCS 88.67 | 11 |
| Dance-to-Music Generation | TikTok (test) | BCS 83.22 | 4 |
| Video-to-Music Generation | AIST++ | BCS 0.923 | 4 |
| Video-to-Music Generation | LORIS | BCS 95.6 | 4 |
| Video-to-Music Generation | TikTok | BCS 87.1 | 3 |
