
VoiceStar: Robust Zero-Shot Autoregressive TTS with Duration Control and Extrapolation

About

We present VoiceStar, the first zero-shot TTS model that achieves both output duration control and extrapolation. VoiceStar is an autoregressive encoder-decoder neural codec language model that leverages a novel Progress-Monitoring Rotary Position Embedding (PM-RoPE) and is trained with Continuation-Prompt Mixed (CPM) training. PM-RoPE enables the model to better align text and speech tokens, indicates the target duration for the generated speech, and allows the model to generate speech waveforms much longer in duration than those seen during training. CPM training also helps mitigate the training/inference mismatch, and significantly improves the quality of the generated speech in terms of speaker similarity and intelligibility. VoiceStar outperforms or is on par with current state-of-the-art models on short-form benchmarks such as Librispeech and Seed-TTS, and significantly outperforms these models on long-form/extrapolation benchmarks (20-50s) in terms of intelligibility and naturalness. Code and models: https://github.com/jasonppy/VoiceStar. Audio samples: https://jasonppy.github.io/VoiceStar_web
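To make the position-embedding idea concrete, the sketch below implements plain rotary position embedding (RoPE) plus an assumed "countdown" position index for speech tokens, so that the index reaches a fixed value at the target duration. The exact PM-RoPE formulation is defined in the paper, not in this abstract; the `rotary_embed` function and the countdown indexing here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rotary_embed(x, positions, base=10000.0):
    """Standard RoPE: rotate each pair of feature dims (i, i + d/2)
    by angle positions * base**(-2i/d). Rotation preserves norms."""
    d = x.shape[-1]
    half = d // 2
    freqs = 1.0 / (base ** (np.arange(half) * 2.0 / d))  # (d/2,)
    angles = np.outer(positions, freqs)                   # (T, d/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)

# Hypothetical progress-monitoring indexing: if the requested output is
# T speech tokens, token t can be given the index t - T, which counts up
# toward 0 at the target duration. The model thus "sees" how close it is
# to the requested length, regardless of absolute sequence length.
T = 8
x = np.random.randn(T, 16)
forward_pos = rotary_embed(x, np.arange(T))       # usual 0..T-1 indices
countdown   = rotary_embed(x, np.arange(T) - T)   # -T..-1, ends at target
```

Because the countdown index depends only on distance to the target, this style of encoding is what lets a model signal duration and generalize to outputs longer than any training utterance.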

Puyuan Peng, Shang-Wen Li, Abdelrahman Mohamed, David Harwath • 2025

Related benchmarks

Task | Dataset | Result | Rank
Text-to-Speech | Seed-TTS en (test) | WER 2.2 | 90
Text-to-Speech | LibriSpeech PC clean (test) | WER 2.14 | 31
Text-to-Speech | Emilia EN speaking-rate | MUSHRA Score 60 | 9
