
Unified speech and gesture synthesis using flow matching

About

As text-to-speech technologies achieve remarkable naturalness in read-aloud tasks, there is growing interest in multimodal synthesis of verbal and non-verbal communicative behaviour, such as spontaneous speech and associated body gestures. This paper presents a novel, unified architecture for jointly synthesising speech acoustics and skeleton-based 3D gesture motion from text, trained using optimal-transport conditional flow matching (OT-CFM). The proposed architecture is simpler than the previous state of the art, has a smaller memory footprint, and can capture the joint distribution of speech and gestures, generating both modalities together in a single process. The new training regime, meanwhile, enables better synthesis quality in far fewer steps (network evaluations) than before. Uni- and multimodal subjective tests demonstrate improved speech naturalness, gesture human-likeness, and cross-modal appropriateness compared to existing benchmarks. Please see https://shivammehta25.github.io/Match-TTSG/ for video examples and code.
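The OT-CFM training objective mentioned above can be sketched as a simple regression on a vector field. The snippet below is a minimal illustration in NumPy, not the authors' implementation: it assumes the standard OT-CFM construction (linear interpolation between a noise sample and a data sample, with a small `sigma_min`), and the idea of treating concatenated speech and gesture features as one joint data vector is stated in the abstract, while all function names here are hypothetical.

```python
import numpy as np

def ot_cfm_pair(x0, x1, t, sigma_min=1e-4):
    """Standard OT-CFM interpolant and regression target.

    x0: noise sample; x1: data sample (e.g. concatenated speech and
    gesture features, so one vector field covers both modalities).
    Returns the interpolated point x_t and the target vector field u_t
    that the network v_theta(x_t, t) is trained to predict.
    """
    x_t = (1.0 - (1.0 - sigma_min) * t) * x0 + t * x1
    u_t = x1 - (1.0 - sigma_min) * x0
    return x_t, u_t

def cfm_loss(v_pred, u_t):
    """Mean squared error between predicted and target vector field."""
    return float(np.mean((v_pred - u_t) ** 2))

# Toy usage: one training pair for a 4-dimensional joint feature vector.
rng = np.random.default_rng(0)
x0 = rng.standard_normal(4)          # noise
x1 = rng.standard_normal(4)          # "data" (speech + gesture features)
t = 0.5
x_t, u_t = ot_cfm_pair(x0, x1, t)
loss = cfm_loss(u_t, u_t)            # a perfect prediction gives zero loss
```

At sampling time, the learned vector field is integrated from noise to data with an ODE solver, which is why only a handful of network evaluations are needed.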

Shivam Mehta, Ruibo Tu, Simon Alexanderson, Jonas Beskow, Éva Székely, Gustav Eje Henter • 2023

Related benchmarks

Task | Dataset | Result | Rank
Speech Synthesis | Speech and 3D gesture (test) | Speech MOS 3.7 | 6
Co-speech Gesture and Speech Synthesis | Trinity Speech-Gesture Dataset II (test) | WER 8.85 | 5
Gesture Motion Synthesis | Speech and 3D gesture (test) | Motion MOS 3.44 | 5
Multimodal Appropriateness | Speech and 3D gesture (test) | MAS 0.53 | 5

Other info

Code
