Glow-TTS: A Generative Flow for Text-to-Speech via Monotonic Alignment Search
About
Recently, text-to-speech (TTS) models such as FastSpeech and ParaNet have been proposed to generate mel-spectrograms from text in parallel. Despite this advantage, the parallel TTS models cannot be trained without guidance from autoregressive TTS models as their external aligners. In this work, we propose Glow-TTS, a flow-based generative model for parallel TTS that does not require any external aligner. By combining the properties of flows and dynamic programming, the proposed model searches for the most probable monotonic alignment between text and the latent representation of speech on its own. We demonstrate that enforcing hard monotonic alignments enables robust TTS, which generalizes to long utterances, and that employing generative flows enables fast, diverse, and controllable speech synthesis. Glow-TTS obtains an order-of-magnitude speed-up at synthesis over the autoregressive model Tacotron 2, with comparable speech quality. We further show that our model can be easily extended to a multi-speaker setting.
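The alignment search described above can be sketched as a Viterbi-style dynamic program over a matrix of per-pair log-likelihoods between text tokens and latent frames, constrained to be monotonic and non-skipping. This is an illustrative NumPy sketch, not the authors' released implementation; the function name and the toy likelihood matrix are assumptions for the example.

```python
import numpy as np

def monotonic_alignment_search(log_lik):
    """Most probable monotonic alignment between text tokens (rows)
    and latent speech frames (columns), found by dynamic programming.

    log_lik: (T_text, T_mel) array of per-pair log-likelihoods.
    Returns a (T_mel,) array `path` with path[j] = text index for frame j.
    Assumes T_mel >= T_text so every token covers at least one frame.
    """
    T_text, T_mel = log_lik.shape
    Q = np.full((T_text, T_mel), -np.inf)  # Q[i, j]: best score ending at (i, j)
    Q[0, 0] = log_lik[0, 0]
    for j in range(1, T_mel):
        # Token index i can be at most j (monotonic, no skipping).
        for i in range(min(j + 1, T_text)):
            stay = Q[i, j - 1]                              # same token, next frame
            advance = Q[i - 1, j - 1] if i > 0 else -np.inf  # move to next token
            Q[i, j] = max(stay, advance) + log_lik[i, j]
    # Backtrack from the final (token, frame) pair.
    path = np.zeros(T_mel, dtype=np.int64)
    i = T_text - 1
    for j in range(T_mel - 1, -1, -1):
        path[j] = i
        if i > 0 and j > 0 and Q[i - 1, j - 1] >= Q[i, j - 1]:
            i -= 1
    return path

# Toy example: 2 tokens, 4 frames; likelihoods favor splitting 2/2.
log_lik = np.array([[ 0., -1., -5., -5.],
                    [-5., -5.,  0.,  0.]])
print(monotonic_alignment_search(log_lik))  # [0 0 1 1]
```

Because the alignment is forced to be monotonic and surjective, the model cannot skip or repeat text, which is what the paper credits for its robustness on long utterances.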
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Speech Synthesis | LJ Speech (test) | MOS | 4.01 | 36 |
| Text-to-Speech | LJSpeech (test) | CMOS | 0.934 | 20 |
| Text-to-Speech | LibriTTS (test) | MOS | 3.45 | 16 |
| Text-to-Speech | Harvard sentences | WER | 3.97 | 8 |
| Speech Synthesis | Manchu Speech Dataset (test) | MOS | 3.89 | 8 |
| Text-to-Speech | LJ Speech (val) | Time to 5% WER | 2.5 | 6 |
| Text-to-Speech | ParaNet 100 sentences (test) | Repeat Errors | 0.00e+0 | 6 |
| Speech Synthesis | LJSpeech (test) | RTF | 0.021 | 6 |
| Text-to-Speech | Speech Synthesis Inference Laboratory Setting | RTF | 0.47 | 5 |