Fewer-token Neural Speech Codec with Time-invariant Codes

About

Language-model-based text-to-speech (TTS) models, like VALL-E, have gained attention for their outstanding in-context learning capability in zero-shot scenarios. The neural speech codec is a critical component of these models: it converts speech into discrete token representations. However, overly long token sequences from the codec can hurt prediction accuracy and restrict the progress of language-model-based TTS models. To address this issue, this paper proposes TiCodec, a novel neural speech codec with time-invariant codes. By encoding and quantizing time-invariant information into a separate code, TiCodec reduces the amount of frame-level information that needs encoding, effectively decreasing the number of tokens used to represent speech. Furthermore, this paper introduces a time-invariant encoding consistency loss to enhance the consistency of the time-invariant code within an utterance and force it to capture more global information, which benefits the zero-shot TTS task. Experimental results demonstrate that TiCodec not only improves the quality of the reconstructed speech with fewer tokens but also increases the similarity and naturalness, and reduces the word error rate, of speech synthesized by the TTS model.
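The core idea in the abstract, a single per-utterance time-invariant code carried alongside frame-level tokens, plus a consistency loss between the time-invariant embeddings of different segments of the same utterance, can be sketched in a few lines of PyTorch. The sketch below is a hedged illustration, not the paper's implementation: the names (`TiCodecSketch`, `consistency_loss`), layer sizes, single-codebook quantizers (the paper uses multiple codebooks for the frame-level and time-invariant paths), and mean-pooling for the global code are all assumptions made for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Nearest-neighbour vector quantizer with a straight-through estimator."""
    def __init__(self, codebook_size: int, dim: int):
        super().__init__()
        self.codebook = nn.Embedding(codebook_size, dim)

    def forward(self, z):                       # z: (B, T, D) or (B, D)
        flat = z.reshape(-1, z.shape[-1])       # (N, D)
        dists = torch.cdist(flat, self.codebook.weight)  # (N, K)
        idx = dists.argmin(dim=-1)
        q = self.codebook(idx).view_as(z)
        q = z + (q - z).detach()                # straight-through gradient
        return q, idx.view(z.shape[:-1])

class TiCodecSketch(nn.Module):
    """Illustrative codec with frame-level tokens plus one time-invariant code."""
    def __init__(self, dim=128, frame_codebook=1024, global_codebook=1024):
        super().__init__()
        self.encoder = nn.Conv1d(1, dim, kernel_size=320, stride=160)  # waveform -> frames
        self.frame_vq = VectorQuantizer(frame_codebook, dim)
        self.global_proj = nn.Linear(dim, dim)
        self.global_vq = VectorQuantizer(global_codebook, dim)
        self.decoder = nn.ConvTranspose1d(2 * dim, 1, kernel_size=320, stride=160)

    def time_invariant_code(self, feats):       # feats: (B, T, D)
        g = self.global_proj(feats.mean(dim=1))           # pool over time -> (B, D)
        return self.global_vq(g)

    def forward(self, wav):                     # wav: (B, 1, L)
        feats = self.encoder(wav).transpose(1, 2)         # (B, T, D)
        q_frame, frame_tokens = self.frame_vq(feats)      # one token per frame
        q_global, global_token = self.time_invariant_code(feats)
        # Broadcast the single global code across all frames before decoding,
        # so frame tokens only need to carry the residual (time-varying) detail.
        fused = torch.cat([q_frame, q_global.unsqueeze(1).expand_as(q_frame)], dim=-1)
        recon = self.decoder(fused.transpose(1, 2))
        return recon, frame_tokens, global_token

def consistency_loss(model, wav):
    """Stand-in for the paper's consistency loss: pull the time-invariant
    embeddings of two halves of the same utterance together."""
    half = wav.shape[-1] // 2
    f1 = model.encoder(wav[..., :half]).transpose(1, 2)
    f2 = model.encoder(wav[..., half:]).transpose(1, 2)
    g1 = model.global_proj(f1.mean(dim=1))
    g2 = model.global_proj(f2.mean(dim=1))
    return 1.0 - F.cosine_similarity(g1, g2, dim=-1).mean()
```

A quick smoke test of the sketch (shapes only, again under the assumptions above):

```python
model = TiCodecSketch()
wav = torch.randn(2, 1, 16000)                  # two dummy 1-second utterances
recon, frame_tokens, global_token = model(wav)  # frame_tokens: (2, T); global_token: (2,)
loss = consistency_loss(model, wav)
```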

Yong Ren, Tao Wang, Jiangyan Yi, Le Xu, Jianhua Tao, Chuyuan Zhang, Junzuo Zhou • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Automatic Speech Recognition | LibriSpeech clean (test) | WER | 9.4 | 833 |
| Text-to-Speech | Seed-TTS (eval) | WER | 12.9 | 39 |
| Voice Conversion | VCTK | WER | 0.5 | 21 |
| Speech Recognition | Switchboard | WER | 29.1 | 18 |
| Text-to-Speech | LibriTTS clean (test) | WER | 0.115 | 15 |
| Audio Encoding and Decoding Efficiency | NVIDIA A6000 Efficiency Benchmark | RTF (Encoding) | 0.0021 | 12 |
