
Grad-TTS: A Diffusion Probabilistic Model for Text-to-Speech

About

Recently, denoising diffusion probabilistic models and generative score matching have shown high potential in modelling complex data distributions, while stochastic calculus has provided a unified point of view on these techniques, allowing for flexible inference schemes. In this paper we introduce Grad-TTS, a novel text-to-speech model with a score-based decoder that produces mel-spectrograms by gradually transforming noise predicted by the encoder and aligned with the text input by means of Monotonic Alignment Search. The framework of stochastic differential equations helps us generalize conventional diffusion probabilistic models to the case of reconstructing data from noise with different parameters, and allows us to make this reconstruction flexible by explicitly controlling the trade-off between sound quality and inference speed. Subjective human evaluation shows that Grad-TTS is competitive with state-of-the-art text-to-speech approaches in terms of Mean Opinion Score. We will make the code publicly available shortly.
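The quality/speed trade-off described above comes from the number of steps used to integrate the reverse-time (probability flow) ODE of the forward SDE dXt = ½βt(μ − Xt)dt + √βt dWt, where μ stands for the encoder-predicted mean. The sketch below is a hypothetical toy illustration, not the paper's implementation: it uses a 1-D Gaussian "data" distribution so the score is known in closed form (a real model would substitute a learned score network s_θ), and all constants (β₀, β₁, μ, a, v) are illustrative choices.

```python
import numpy as np

# Toy sketch of score-based reverse-ODE sampling (hypothetical parameters).
beta0, beta1 = 0.05, 20.0   # linear noise schedule, common in score-SDE models
mu = 0.0                    # stand-in for the encoder-predicted terminal mean
a, v = 3.0, 0.25            # toy "data" distribution N(a, v)

def beta(t):
    return beta0 + (beta1 - beta0) * t

def int_beta(t):
    # \int_0^t beta(s) ds for the linear schedule
    return beta0 * t + 0.5 * (beta1 - beta0) * t ** 2

def score(x, t):
    # Exact score of the Gaussian marginal p_t when data ~ N(a, v);
    # in Grad-TTS this role is played by a trained network s_theta(x, mu, t).
    rho = np.exp(-0.5 * int_beta(t))
    m_t = mu + (a - mu) * rho
    var_t = v * rho ** 2 + (1.0 - rho ** 2)
    return -(x - m_t) / var_t

def sample(n_steps, n_samples, rng):
    # Euler scheme for the reverse-time probability flow ODE
    #   dx/dt = 0.5 * beta(t) * (mu - x - score(x, t)),
    # integrated backwards from t = 1 to t = 0.
    x = mu + rng.standard_normal(n_samples)  # start from the prior N(mu, I)
    h = 1.0 / n_steps
    for i in range(n_steps, 0, -1):
        t = i * h
        x = x - h * 0.5 * beta(t) * (mu - x - score(x, t))
    return x

rng = np.random.default_rng(0)
x = sample(n_steps=200, n_samples=20000, rng=rng)
print(x.mean(), x.std())  # should approach a = 3.0 and sqrt(v) = 0.5
```

Fewer Euler steps make sampling proportionally faster at the cost of larger discretization error, which mirrors the quality/speed trade-off the abstract refers to.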

Vadim Popov, Ivan Vovk, Vladimir Gogoryan, Tasnima Sadekova, Mikhail Kudinov • 2021

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Speech Synthesis | LJ Speech (test) | MOS 3.49 | 36 |
| Text-to-Speech | LJSpeech (test) | CMOS -0.23 | 20 |
| Speech Synthesis | Speech and 3D gesture (test) | Speech MOS 3.38 | 6 |
| Speech Synthesis | LJSpeech (test) | RTF 0.082 | 6 |
| Co-speech Gesture and Speech Synthesis | Trinity Speech-Gesture Dataset II (test) | WER 10.39 | 5 |
| Gesture Motion Synthesis | Speech and 3D gesture (test) | Motion MOS 3.13 | 5 |
| Multimodal Appropriateness | Speech and 3D gesture (test) | MAS 0.43 | 5 |

Other info

Code
