
Diff-TTSG: Denoising probabilistic integrated speech and gesture synthesis

About

With read-aloud speech synthesis achieving high naturalness scores, there is a growing research interest in synthesising spontaneous speech. However, human spontaneous face-to-face conversation has both spoken and non-verbal aspects (here, co-speech gestures). Only recently has research begun to explore the benefits of jointly synthesising these two modalities in a single system. The previous state of the art used non-probabilistic methods, which fail to capture the variability of human speech and motion, and risk producing oversmoothing artefacts and sub-optimal synthesis quality. We present the first diffusion-based probabilistic model, called Diff-TTSG, that jointly learns to synthesise speech and gestures together. Our method can be trained on small datasets from scratch. Furthermore, we describe a set of careful uni- and multi-modal subjective tests for evaluating integrated speech and gesture synthesis systems, and use them to validate our proposed approach. Please see https://shivammehta25.github.io/Diff-TTSG/ for video examples, data, and code.
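To make the joint-diffusion idea concrete, the sketch below trains a single denoiser on concatenated speech (mel-spectrogram) and gesture (pose) features, so one model learns the distribution of both modalities together. This is a minimal DDPM-style illustration under stated assumptions, not the paper's actual architecture; the feature sizes, the plain MLP denoiser, and all names are hypothetical.

# Minimal, illustrative sketch (NOT the authors' implementation): one diffusion
# model denoises speech and gesture features jointly, so both modalities are
# sampled from a single learned distribution. Dimensions and the MLP denoiser
# below are assumptions for illustration only.
import torch
import torch.nn as nn

N_MEL, N_POSE, T_STEPS = 80, 45, 100          # assumed feature sizes / diffusion steps

class JointDenoiser(nn.Module):
    """Predicts the noise added to concatenated speech+gesture features."""
    def __init__(self):
        super().__init__()
        d = N_MEL + N_POSE
        self.net = nn.Sequential(nn.Linear(d + 1, 256), nn.SiLU(),
                                 nn.Linear(256, d))

    def forward(self, x_t, t):
        # x_t: (batch, frames, N_MEL + N_POSE); t: (batch,) step scaled to [0, 1]
        t = t[:, None, None].expand(*x_t.shape[:2], 1)
        return self.net(torch.cat([x_t, t], dim=-1))

def training_step(model, mel, pose, alphas_cumprod):
    """One DDPM-style step: noise both modalities together, predict the noise."""
    x0 = torch.cat([mel, pose], dim=-1)                    # joint clean target
    t = torch.randint(0, T_STEPS, (x0.shape[0],))
    a = alphas_cumprod[t][:, None, None]
    eps = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps             # forward noising
    eps_hat = model(x_t, t.float() / T_STEPS)
    return nn.functional.mse_loss(eps_hat, eps)            # denoising loss

betas = torch.linspace(1e-4, 0.02, T_STEPS)
alphas_cumprod = torch.cumprod(1 - betas, dim=0)
model = JointDenoiser()
loss = training_step(model, torch.randn(2, 50, N_MEL),
                     torch.randn(2, 50, N_POSE), alphas_cumprod)

Because both modalities share one noising process and one loss, samples drawn from such a model are jointly distributed, rather than being produced by two separately trained systems.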

Shivam Mehta, Siyang Wang, Simon Alexanderson, Jonas Beskow, Éva Székely, Gustav Eje Henter · 2023

Related benchmarks

Task                                      Dataset                                    Metric       Value   Rank
Speech Synthesis                          Speech and 3D gesture (test)               Speech MOS   3.27    6
Co-speech Gesture and Speech Synthesis    Trinity Speech-Gesture Dataset II (test)   WER          12.42   5
Gesture Motion Synthesis                  Speech and 3D gesture (test)               Motion MOS   3.11    5
Multimodal Appropriateness                Speech and 3D gesture (test)               MAS          0.31    5
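For reference, two of the table's metrics follow standard definitions: MOS (mean opinion score) is the average of 1-to-5 subjective ratings, and WER (word error rate) is the word-level edit distance between a transcript of the synthesised speech and the reference text, divided by the number of reference words. The helpers below illustrate those conventional formulas; they are assumptions, not the benchmark's scoring code.

# Illustrative metric helpers (assumptions, not the benchmark's scoring code).
def mos(ratings):
    """Mean Opinion Score: average of per-stimulus ratings on a 1-5 scale."""
    return sum(ratings) / len(ratings)

def wer(reference, hypothesis):
    """Word error rate via edit distance: (S + D + I) / #reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(round(100 * wer("the cat sat", "the cat sat down"), 2))  # 33.33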
