
Masked Autoencoders that Listen

About

This paper studies a simple extension of image-based Masked Autoencoders (MAE) to self-supervised representation learning from audio spectrograms. Following the Transformer encoder-decoder design in MAE, our Audio-MAE first encodes audio spectrogram patches with a high masking ratio, feeding only the non-masked tokens through encoder layers. The decoder then re-orders and decodes the encoded context padded with mask tokens, in order to reconstruct the input spectrogram. We find it beneficial to incorporate local window attention in the decoder, as audio spectrograms are highly correlated in local time and frequency bands. We then fine-tune the encoder with a lower masking ratio on target datasets. Empirically, Audio-MAE sets new state-of-the-art performance on six audio and speech classification tasks, outperforming other recent models that use external supervised pre-training. The code and models will be available at https://github.com/facebookresearch/AudioMAE.
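As a rough illustration of the pipeline described above, the minimal PyTorch sketch below patchifies a spectrogram, randomly masks most of the patches, encodes only the visible tokens, pads the encoded context with mask tokens, restores the original patch order, and decodes to reconstruct all patches. The module sizes and the 0.8 masking ratio are illustrative assumptions rather than the paper's exact configuration, and the decoder here uses plain global attention instead of the local window attention described above; the authors' implementation is at the repository linked in the abstract.

```python
# Minimal Audio-MAE-style masking/encode/decode sketch (illustrative, not the official code).
import torch
import torch.nn as nn


class AudioMAESketch(nn.Module):
    def __init__(self, n_mels=128, n_frames=1024, patch=16,
                 enc_dim=768, dec_dim=512, mask_ratio=0.8):
        super().__init__()
        self.patch = patch
        self.mask_ratio = mask_ratio
        num_patches = (n_mels // patch) * (n_frames // patch)

        # Patch embedding: spectrogram -> sequence of patch tokens
        self.patch_embed = nn.Conv2d(1, enc_dim, kernel_size=patch, stride=patch)
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, enc_dim))

        # Encoder sees only the visible (non-masked) tokens
        enc_layer = nn.TransformerEncoderLayer(enc_dim, nhead=12, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=12)

        # Decoder re-inserts mask tokens and reconstructs all patches
        # (global attention here; the paper adds local window attention)
        self.enc_to_dec = nn.Linear(enc_dim, dec_dim)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dec_dim))
        self.dec_pos_embed = nn.Parameter(torch.zeros(1, num_patches, dec_dim))
        dec_layer = nn.TransformerEncoderLayer(dec_dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, num_layers=8)
        self.head = nn.Linear(dec_dim, patch * patch)  # predict pixels per patch

    def random_mask(self, x):
        # Keep a random subset of tokens; remember how to undo the shuffle.
        B, N, D = x.shape
        n_keep = int(N * (1 - self.mask_ratio))
        noise = torch.rand(B, N, device=x.device)
        ids_shuffle = noise.argsort(dim=1)
        ids_restore = ids_shuffle.argsort(dim=1)
        ids_keep = ids_shuffle[:, :n_keep]
        x_vis = torch.gather(x, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
        return x_vis, ids_restore

    def forward(self, spec):                     # spec: (B, 1, n_mels, n_frames)
        x = self.patch_embed(spec).flatten(2).transpose(1, 2)  # (B, N, enc_dim)
        x = x + self.pos_embed
        x_vis, ids_restore = self.random_mask(x)

        latent = self.encoder(x_vis)             # encode visible tokens only
        latent = self.enc_to_dec(latent)

        # Pad with mask tokens and restore the original patch order
        B, N = ids_restore.shape
        mask_tokens = self.mask_token.expand(B, N - latent.shape[1], -1)
        full = torch.cat([latent, mask_tokens], dim=1)
        full = torch.gather(
            full, 1, ids_restore.unsqueeze(-1).expand(-1, -1, full.shape[-1]))
        full = full + self.dec_pos_embed

        return self.head(self.decoder(full))     # (B, N, patch*patch) reconstruction


spec = torch.randn(2, 1, 128, 1024)              # batch of log-mel spectrograms
print(AudioMAESketch()(spec).shape)              # torch.Size([2, 512, 256])
```

As in image MAE, the decoder is only needed for the reconstruction objective during pre-training; for fine-tuning on target datasets, the encoder alone is kept and trained with a classification head.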

Po-Yao Huang, Hu Xu, Juncheng Li, Alexei Baevski, Michael Auli, Wojciech Galuba, Florian Metze, Christoph Feichtenhofer • 2022

Related benchmarks

Task                   | Dataset                    | Result          | Rank
-----------------------|----------------------------|-----------------|-----
Audio Classification   | ESC-50                     | Accuracy 97.4   | 325
Audio Classification   | AudioSet 20K               | mAP 37.6        | 128
Audio Classification   | AudioSet 2M                | mAP 47.4        | 79
Audio Classification   | SPC V2                     | Accuracy 98.3   | 65
Audio Classification   | ESC50                      | Top-1 Acc 93.6  | 64
Keyword Spotting       | Speech Commands V2         | Accuracy 98.3   | 61
Speaker Identification | VoxCeleb1                  | Accuracy 94.8   | 58
Classification         | AudioSet (test)            | mAP 47.3        | 57
Audio Recognition      | Speech Commands V2         | Accuracy 98.3   | 43
Audio Classification   | Speech Commands V2 (test)  | Accuracy 98.3   | 35
Showing 10 of 41 rows

Other info

Code: https://github.com/facebookresearch/AudioMAE
