Scaling Transformers for Low-Bitrate High-Quality Speech Coding

About

The tokenization of speech with neural audio codec models is a vital part of modern AI pipelines for the generation or understanding of speech, alone or in a multimodal context. Traditionally, such tokenization models have concentrated on low-parameter-count architectures using only components with strong inductive biases. In this work we show that by scaling a transformer architecture with a large parameter count to this problem, and applying a flexible Finite Scalar Quantization (FSQ) based bottleneck, it is possible to reach state-of-the-art speech quality at extremely low bitrates of 400 or 700 bits per second. The trained models strongly outperform existing baselines in both objective and subjective tests.

Julian D Parker, Anton Smirnov, Jordi Pons, CJ Carr, Zack Zukowski, Zach Evans, Xubo Liu • 2024
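
The FSQ bottleneck named in the abstract replaces the learned codebooks of vector quantization with per-dimension scalar rounding. The sketch below is a minimal illustration of that idea, not the paper's implementation: the level counts, latent width, and tensor shapes are hypothetical, chosen only to show the mechanics and the bitrate arithmetic.

```python
import torch
import torch.nn as nn


class FSQ(nn.Module):
    """Minimal Finite Scalar Quantization bottleneck (illustrative sketch).

    Each latent dimension is squashed to a bounded range, scaled so that
    rounding yields a fixed number of levels, then passed through with a
    straight-through gradient estimator. Odd level counts keep the integer
    grid symmetric around zero.
    """

    def __init__(self, levels=(5, 5, 5, 5)):  # hypothetical level counts
        super().__init__()
        self.register_buffer("levels", torch.tensor(levels, dtype=torch.float32))

    def forward(self, z):
        # z: (..., len(levels)) continuous latents from the encoder.
        half = (self.levels - 1) / 2        # e.g. 2.0 for 5 levels
        bounded = torch.tanh(z) * half      # squash into (-half, half)
        quantized = torch.round(bounded)    # snap to the integer grid
        # Straight-through estimator: quantize forward, identity backward.
        return bounded + (quantized - bounded).detach()


fsq = FSQ()
z = torch.randn(2, 100, 4)                  # (batch, frames, latent dims)
codes = fsq(z)                              # quantized latents, same shape
# Each frame carries sum(log2(levels)) bits, here about 9.3; multiplying by
# the codec's token rate gives the bitrate. The paper's own configuration
# (not reproduced here) reaches 400 or 700 bits per second.
```

A practical appeal of FSQ is that there is no codebook to train or to collapse; the bits carried per token are fixed up front by the chosen level counts, which makes a low target bitrate straightforward to dial in.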

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Automatic Speech Recognition | LibriSpeech clean (test) | WER | 11.8 | 1156 |
| Speech Reconstruction | LibriTTS clean (test) | PESQ | 1.787 | 63 |
| Speech Reconstruction | LibriSpeech (test-clean) | UTMOS | 4.23 | 59 |
| Image Reconstruction | ImageNet | PSNR | 24.8198 | 56 |
| Speech Reconstruction | LibriSpeech English (test-clean) | SIM | 0.62 | 54 |
| Speech Reconstruction | AISHELL-2 Chinese | SIM | 0.45 | 54 |
| Text-to-Speech | Seed-TTS (eval) | WER | 10.9 | 39 |
| Text-to-Speech | LibriTTS clean (test) | WER | 0.09 | 30 |
| Audio Reconstruction | MusicDB (test) | – | – | 28 |
| Image Reconstruction | COCO (test) | CVU | 0.8607 | 24 |
Showing 10 of 25 rows.
