
Masked Audio Modeling with CLAP and Multi-Objective Learning

About

Most existing masked audio modeling (MAM) methods learn audio representations by masking and reconstructing local spectrogram patches. However, the reconstruction loss mainly measures the signal-level quality of the reconstructed spectrogram and is limited in capturing high-level audio semantics. In this paper, we propose to enhance the semantic modeling of MAM in two ways: by distilling cross-modality knowledge from contrastive language-audio pretraining (CLAP) representations for both masked and unmasked regions (MAM-CLAP), and by adopting a multi-objective learning strategy with a supervised classification branch (SupMAM). Together, these provide richer semantic knowledge for MAM and enable it to effectively learn global features from labels. Experiments show that our methods significantly improve performance on multiple downstream tasks. Furthermore, by combining MAM-CLAP with SupMAM, we achieve new state-of-the-art results on various audio and speech classification tasks, exceeding previous self-supervised learning and supervised pretraining methods.
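The combined objective described above can be sketched as a weighted sum of a CLAP-distillation term (over both masked and unmasked patch predictions) and a supervised classification term. The function names, shapes, and the choice of cosine distance for distillation below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def clap_distill_loss(pred, clap_target):
    # Cosine distance between predicted patch features and CLAP teacher
    # features, averaged over patches (assumed distillation metric).
    p = pred / np.linalg.norm(pred, axis=-1, keepdims=True)
    t = clap_target / np.linalg.norm(clap_target, axis=-1, keepdims=True)
    return float(np.mean(1.0 - np.sum(p * t, axis=-1)))

def cross_entropy(logits, label):
    # Numerically stable cross-entropy for the supervised branch.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return float(-log_probs[label])

def multi_objective_loss(pred_masked, pred_unmasked,
                         clap_masked, clap_unmasked,
                         class_logits, label, lam=1.0):
    # MAM-CLAP: distill CLAP representations for both masked and
    # unmasked regions of the spectrogram.
    distill = (clap_distill_loss(pred_masked, clap_masked)
               + clap_distill_loss(pred_unmasked, clap_unmasked))
    # SupMAM: supervised classification branch on global features;
    # lam is a hypothetical balancing weight.
    sup = cross_entropy(class_logits, label)
    return distill + lam * sup
```

When the student predictions match the CLAP targets exactly, the distillation term vanishes and only the supervised term remains, which makes the two objectives easy to monitor separately during training.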

Yifei Xin, Xiulian Peng, Yan Lu • 2024

Related benchmarks

Task                 Dataset                  Metric    Result   Rank
Audio Classification ESC-50                   Accuracy  97.6     325
Audio Classification AudioSet 20K             mAP       38.6     128
Audio Classification AudioSet 2M              mAP       48.5     79
Audio Classification SPC V2                   Accuracy  98.7     65
Keyword Spotting     Speech Commands V2       Accuracy  98.7     61
Audio Event Tagging  AudioSet AS-2M (full)    mAP       48.5     33
Audio Event Tagging  AudioSet (AS-20K)        mAP       38.6     24
