
High-Fidelity Audio Compression with Improved RVQGAN

About

Language models have been successfully used to model natural signals, such as images, speech, and music. A key component of these models is a high-quality neural compression model that can compress high-dimensional natural signals into lower-dimensional discrete tokens. To that end, we introduce a high-fidelity universal neural audio compression algorithm that achieves ~90x compression of 44.1 kHz audio into tokens at just 8 kbps bandwidth. We achieve this by combining advances in high-fidelity audio generation with better vector quantization techniques from the image domain, along with improved adversarial and reconstruction losses. We compress all domains (speech, environment, music, etc.) with a single universal model, making it widely applicable to generative modeling of all audio. We compare with competing audio compression algorithms and find that our method significantly outperforms them. We provide thorough ablations for every design choice, as well as open-source code and trained model weights. We hope our work can lay the foundation for the next generation of high-fidelity audio modeling.
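For intuition on the compression figure: 44.1 kHz mono PCM at 16 bits per sample is 44,100 × 16 ≈ 705.6 kbps, and 705.6 / 8 ≈ 88x, consistent with the stated ~90x. The quantization technique named in the title, residual vector quantization (RVQ), can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; the dimensions, codebook sizes, and random codebooks are assumptions for demonstration only.

```python
import numpy as np

# Illustrative RVQ sketch: each stage quantizes the residual left over by the
# previous stage, so a vector is encoded as one small integer per codebook.
# Shapes and codebook contents here are made up, not the trained model's.
rng = np.random.default_rng(0)
dim, codebook_size, n_quantizers = 8, 16, 4
codebooks = rng.normal(size=(n_quantizers, codebook_size, dim))

def rvq_encode(x, codebooks):
    """Encode x as one codebook index per quantizer stage."""
    residual = x
    codes = []
    for cb in codebooks:
        # pick the codebook entry nearest to the current residual
        idx = int(np.argmin(((residual - cb) ** 2).sum(axis=-1)))
        codes.append(idx)
        residual = residual - cb[idx]
    return codes

def rvq_decode(codes, codebooks):
    """Reconstruct by summing the chosen entry from each codebook."""
    return sum(cb[i] for cb, i in zip(codebooks, codes))

x = rng.normal(size=dim)
codes = rvq_encode(x, codebooks)          # e.g. 4 integers in [0, 16)
x_hat = rvq_decode(codes, codebooks)      # approximate reconstruction of x
```

In a neural codec like the one described here, the codebooks are learned jointly with the encoder and decoder, and the per-stage indices are the discrete tokens that downstream generative models operate on.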

Rithesh Kumar, Prem Seetharaman, Alejandro Luebs, Ishaan Kumar, Kundan Kumar • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Video-to-Audio Generation | VGGSound (test) | FAD | 1.06 | 83 |
| Text-to-Speech | LibriSpeech clean (test) | WER | 2.26 | 66 |
| Audio Reconstruction | AudioSet (eval) | Mel Distance | 0.4581 | 63 |
| Speech Reconstruction | LibriTTS clean (test) | PESQ | 3.908 | 63 |
| Speech Reconstruction | LibriSpeech (test-clean) | UTMOS | 2.5845 | 59 |
| Speech Reconstruction | AISHELL-2 Chinese | SIM | 0.84 | 54 |
| Speech Reconstruction | LibriSpeech English (test-clean) | SIM | 0.89 | 54 |
| Speech Reconstruction | LibriTTS (test-other) | UTMOS | 3.4 | 44 |
| Audio-Visual Question Answering | AVQA | Accuracy | 65.61 | 37 |
| Audio Reconstruction | MusicDB (test) | Mel Distance | 0.3578 | 28 |
Showing 10 of 76 rows.
