
Listen, Denoise, Action! Audio-Driven Motion Synthesis with Diffusion Models

About

Diffusion models have experienced a surge of interest as highly expressive yet efficiently trainable probabilistic models. We show that these models are an excellent fit for synthesising human motion that co-occurs with audio, e.g., dancing and co-speech gesticulation, since motion is complex and highly ambiguous given audio, calling for a probabilistic description. Specifically, we adapt the DiffWave architecture to model 3D pose sequences, putting Conformers in place of dilated convolutions for improved modelling power. We also demonstrate control over motion style, using classifier-free guidance to adjust the strength of the stylistic expression. Experiments on gesture and dance generation confirm that the proposed method achieves top-of-the-line motion quality, with distinctive styles whose expression can be made more or less pronounced. We also synthesise path-driven locomotion using the same model architecture. Finally, we generalise the guidance procedure to obtain product-of-expert ensembles of diffusion models and demonstrate how these may be used for, e.g., style interpolation, a contribution we believe is of independent interest. See https://www.speech.kth.se/research/listen-denoise-action/ for video examples, data, and code.
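The style control described above rests on classifier-free guidance, which the authors generalise to product-of-expert ensembles of diffusion models. A minimal sketch of that idea, assuming a denoiser that outputs noise predictions (the function and array shapes here are hypothetical; the actual model is a Conformer-based DiffWave adaptation):

```python
import numpy as np

def guided_noise(eps_uncond, eps_cond_styles, weights):
    """Classifier-free guidance generalised to several style-conditioned
    noise predictions (a sketch, not the authors' implementation).

    eps_uncond      : unconditional noise prediction, shape (T, D)
    eps_cond_styles : list of style-conditioned predictions, each (T, D)
    weights         : one guidance weight per style; a single weight > 1
                      exaggerates that style, and weights that sum to 1
                      across several styles interpolate between them.
    """
    eps = eps_uncond.copy()
    for w, eps_c in zip(weights, eps_cond_styles):
        # Each term pushes the prediction toward (w > 0) or away from
        # (w < 0) the corresponding style-conditioned expert.
        eps += w * (eps_c - eps_uncond)
    return eps
```

With one style and weight 1.0 this reduces to ordinary conditional sampling; weight 0.0 recovers the unconditional prediction, and intermediate or larger weights scale the stylistic expression, matching the behaviour the abstract describes.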

Simon Alexanderson, Rajmund Nagy, Jonas Beskow, Gustav Eje Henter • 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Speech-driven Holistic Expression and Gesture Generation | BEAT 2022 (test) | FMD | 688.3 | 9
Gesture Generation | Photoreal (test) | Beat Alignment | 73.2 | 7
Speech-driven gesture generation | BEAT (test) | -- | -- | 7
Lip synchronization | BEAT | MSE | 0.1642 | 5
