WaveGlow: A Flow-based Generative Network for Speech Synthesis
About
In this paper we propose WaveGlow: a flow-based network capable of generating high-quality speech from mel-spectrograms. WaveGlow combines insights from Glow and WaveNet in order to provide fast, efficient, and high-quality audio synthesis, without the need for auto-regression. WaveGlow is implemented using only a single network, trained using only a single cost function: maximizing the likelihood of the training data, which makes the training procedure simple and stable. Our PyTorch implementation produces audio samples at a rate of more than 500 kHz on an NVIDIA V100 GPU. Mean Opinion Scores show that it delivers audio quality as good as the best publicly available WaveNet implementation. All code will be made publicly available online.
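The core idea in the abstract, training an invertible (flow-based) network by directly maximizing data likelihood, can be illustrated with a toy affine coupling layer. This is a minimal NumPy sketch of the change-of-variables objective, not WaveGlow itself: the random weight matrices `W_s` and `W_t` are hypothetical stand-ins for WaveGlow's learned conditioning networks, and the mel-spectrogram conditioning is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy parameters standing in for the learned networks
# inside a real coupling layer.
W_s = rng.normal(scale=0.1, size=(4, 4))
W_t = rng.normal(scale=0.1, size=(4, 4))

def coupling_forward(x):
    """One affine coupling layer: transform half of x conditioned on the other half.

    Because x_a passes through unchanged, the Jacobian is triangular and
    its log-determinant is just the sum of the log-scales s.
    """
    x_a, x_b = x[:4], x[4:]
    s = np.tanh(W_s @ x_a)           # log-scale, kept bounded for stability
    t = W_t @ x_a                    # shift
    z_b = x_b * np.exp(s) + t
    log_det = s.sum()                # log|det dz/dx| of the affine map
    return np.concatenate([x_a, z_b]), log_det

def log_likelihood(x):
    """Exact log p(x) = log N(z; 0, I) + log|det dz/dx| — the single cost function."""
    z, log_det = coupling_forward(x)
    log_pz = -0.5 * (z @ z + len(z) * np.log(2.0 * np.pi))
    return log_pz + log_det

x = rng.normal(size=8)
print(log_likelihood(x))
```

Training maximizes this quantity over the data; at synthesis time the same layers are run in their closed-form inverse direction, mapping Gaussian noise back to audio, which is why no autoregression is needed.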
Ryan Prenger, Rafael Valle, Bryan Catanzaro • 2018
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Speech Synthesis | LJ Speech (test) | MOS | 4.34 | 36 |
| Audio Generation | LJ Speech (test) | LL Score | 5.026 | 20 |
| Audio Generation | LibriTTS (dev) | M-STFT | 1.3099 | 18 |
| Speech Synthesis | LJSpeech | MOS | 3.81 | 12 |
| Audio Synthesis | LJSpeech (unseen) | MAE | 0.4933 | 10 |
| Neural Vocoding | LibriTTS clean (dev) | MAE | 0.5368 | 10 |
| Neural Vocoding | VCTK 100 audio clips (unseen) | MAE | 0.5454 | 10 |
| Vocoding | LibriTTS (dev-other) | MAE | 0.5096 | 10 |
| End-to-End Speech Synthesis | Tacotron2 pipeline | MOS | 3.69 | 9 |
| Neural Vocoding | LJSpeech | MOS | 3.03 | 9 |
*Showing 10 of 13 rows.*