
Speech Gesture Generation from the Trimodal Context of Text, Audio, and Speaker Identity

About

For human-like agents, including virtual avatars and social robots, making proper gestures while speaking is crucial in human-agent interaction. Co-speech gestures enhance interaction experiences and make the agents look alive. However, it is difficult to generate human-like gestures due to the lack of understanding of how people gesture. Data-driven approaches attempt to learn gesticulation skills from human demonstrations, but the ambiguous and individual nature of gestures hinders learning. In this paper, we present an automatic gesture generation model that uses the multimodal context of speech text, audio, and speaker identity to reliably generate gestures. By incorporating a multimodal context and an adversarial training scheme, the proposed model outputs gestures that are human-like and that match the speech content and rhythm. We also introduce a new quantitative evaluation metric for gesture generation models. Experiments with the introduced metric and subjective human evaluation showed that the proposed gesture generation model is better than existing end-to-end generation models. We further confirm that our model is able to work with synthesized audio in a scenario where contexts are constrained, and show that different gesture styles can be generated for the same speech by specifying different speaker identities in the style embedding space that is learned from videos of various speakers. All the code and data are available at https://github.com/ai4r/Gesture-Generation-from-Trimodal-Context.
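The quantitative metric introduced by the paper (Fréchet Gesture Distance, FGD, the metric reported in the benchmark table below) compares the distribution of latent features extracted from generated gestures against those of real gestures, in the style of the Fréchet Inception Distance. A minimal numpy-only sketch of the underlying Fréchet distance between two feature sets is shown here; the feature extractor itself (an autoencoder trained on human motion in the paper) is omitted, and the function name is illustrative, not the repository's API:

```python
import numpy as np

def frechet_distance(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    """Frechet distance between two Gaussian fits of feature sets.

    Each input is an (n_samples, n_features) array of latent features,
    e.g. autoencoder codes of real vs. generated gesture clips.
    """
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_gen, rowvar=False)

    diff = mu1 - mu2
    # Tr((sigma1 sigma2)^(1/2)) = sum of sqrt eigenvalues of sigma1 @ sigma2;
    # clip tiny negative eigenvalues caused by numerical error.
    eigvals = np.linalg.eigvals(sigma1 @ sigma2).real
    tr_covmean = np.sum(np.sqrt(np.clip(eigvals, 0.0, None)))

    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2) - 2.0 * tr_covmean)
```

Lower values mean the generated feature distribution is closer to the real one; identical feature sets give a distance of (numerically) zero.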

Youngwoo Yoon, Bok Cha, Joo-Haeng Lee, Minsu Jang, Jaeyeon Lee, Jaehong Kim, Geehyuk Lee • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Co-speech 3D Gesture Synthesis | BEAT2 (test) | FGD 12.41 | 27 |
| Gesture Generation | BEAT-2 (test) | BC 5.933 | 22 |
| Co-Speech Gesture Video Generation | PATS (test) | Diversity 3.02 | 22 |
| Gesture Generation | BEAT2 | FGD 12.41 | 17 |
| Co-speech motion generation | BEATX (test) | FGD 19.759 | 16 |
| 3D co-speech gesture generation | TED-ETrans (test) | FGD_h+t 21.06 | 14 |
| 3D co-speech gesture generation | BEAT-ETrans (test) | FGD (h+t) 14.09 | 14 |
| Co-speech gesture generation | BEATX Standard (test) | FGD 19.759 | 11 |
| Speech-driven gesture generation | BEAT-X | FGD 12.41 | 11 |
| Co-speech gesture synthesis | TED (test) | FGD 3.729 | 9 |

Showing 10 of 29 rows.

Other info

Code: https://github.com/ai4r/Gesture-Generation-from-Trimodal-Context
