
EnCodecMAE: Leveraging neural codecs for universal audio representation learning

About

The goal of universal audio representation learning is to obtain foundational models that can be used for a variety of downstream tasks involving speech, music and environmental sounds. To approach this problem, methods inspired by works on self-supervised learning for NLP, like BERT, or computer vision, like masked autoencoders (MAE), are often adapted to the audio domain. In this work, we propose masking representations of the audio signal, and training a MAE to reconstruct the masked segments. The reconstruction is done by predicting the discrete units generated by EnCodec, a neural audio codec, from the unmasked inputs. We evaluate this approach, which we call EnCodecMAE, on a wide range of tasks involving speech, music and environmental sounds. Our best model outperforms various state-of-the-art audio representation models in terms of global performance. Additionally, we evaluate the resulting representations in the challenging task of automatic speech recognition (ASR), obtaining decent results and paving the way for a universal audio representation.
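The masked-prediction setup described above — mask spans of the input representation, then train the model to predict the discrete EnCodec unit at each masked position — can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the function name `make_masked_batch`, the zero-fill masking, and the toy shapes are all assumptions for demonstration.

```python
import numpy as np

# Hypothetical sketch of the EnCodecMAE objective (not the paper's code):
# frame-level features are masked, and the model learns to predict the
# discrete EnCodec token ID at each masked position.

rng = np.random.default_rng(0)

def make_masked_batch(features, codec_tokens, mask_ratio=0.5, rng=rng):
    """Randomly mask a fraction of frames.

    features:     (T, D) array of per-frame audio features (model input).
    codec_tokens: (T,) array of discrete EnCodec unit IDs (targets).
    Returns the masked features, a boolean mask, and the target IDs
    at the masked positions only (where the loss would be computed).
    """
    T = features.shape[0]
    n_masked = int(round(T * mask_ratio))
    mask = np.zeros(T, dtype=bool)
    mask[rng.choice(T, size=n_masked, replace=False)] = True
    masked = features.copy()
    masked[mask] = 0.0  # replace masked frames (here simply zeroed out)
    return masked, mask, codec_tokens[mask]

# Toy example: 10 frames, 4-dim features, a codebook of 8 discrete units.
feats = rng.normal(size=(10, 4))
tokens = rng.integers(0, 8, size=10)
masked_feats, mask, targets = make_masked_batch(feats, tokens)
print(mask.sum())  # half of the 10 frames are masked
```

In the actual model, a transformer would consume `masked_feats` and a classification head would be trained with cross-entropy against `targets`; the sketch only shows how inputs and discrete targets are paired.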

Leonardo Pepino, Pablo Riera, Luciana Ferrer · 2023

Related benchmarks

| Task                               | Dataset                                       | Metric       | Result | Rank |
|------------------------------------|-----------------------------------------------|--------------|--------|------|
| Environmental Sound Classification | ESC-50 (5-fold cross-validation)              | Accuracy     | 88.25  | 38   |
| Speech Emotion Recognition         | IEMOCAP (five-fold/ten-fold cross-validation) | WA           | 67.8   | 25   |
| Musical Instrument Classification  | NSynth (test)                                 | Accuracy     | 77.23  | 22   |
| Audio Classification               | UrbanSound8K (official 10-fold split)         | Accuracy (%) | 85.42  | 15   |
| Speech Command Recognition         | SPCV2 (evaluation)                            | Accuracy     | 97.3   | 5    |
| Speaker Identification             | VOX 1 (evaluation)                            | Accuracy     | 71.65  | 5    |
