Masked Audio Modeling with CLAP and Multi-Objective Learning
About
Most existing masked audio modeling (MAM) methods learn audio representations by masking and reconstructing local spectrogram patches. However, the reconstruction loss mainly measures the signal-level quality of the reconstructed spectrogram and remains limited in capturing high-level audio semantics. In this paper, we propose two enhancements to the semantic modeling of MAM. First, MAM-CLAP distills cross-modality knowledge from contrastive language-audio pretraining (CLAP) representations for both masked and unmasked regions, providing richer semantic targets for MAM. Second, SupMAM applies a multi-objective learning strategy with a supervised classification branch, enabling the model to learn global features from labels. Experiments show that our methods significantly improve performance on multiple downstream tasks. Furthermore, by combining MAM-CLAP with SupMAM, we achieve new state-of-the-art results on various audio and speech classification tasks, exceeding previous self-supervised learning and supervised pretraining methods.
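The abstract describes combining three training signals: patch reconstruction, distillation toward CLAP teacher features, and supervised classification. The weighted-sum form below is a minimal NumPy sketch of such a multi-objective loss; all function names, tensor shapes, loss weights, and the choice of MSE/cosine/cross-entropy terms are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def mam_clap_supmam_loss(pred_patches, target_patches, mask,
                         student_feats, clap_feats,
                         logits, labels,
                         w_rec=1.0, w_distill=1.0, w_sup=1.0):
    """Hypothetical combined objective: reconstruction + CLAP distillation
    + supervised classification, summed with illustrative weights."""
    # Reconstruction term: MSE over the masked spectrogram patches only
    # (mask has one entry per patch; True/1 marks a masked patch).
    rec = np.mean(((pred_patches - target_patches) ** 2)[mask.astype(bool)])

    # Distillation term: 1 - cosine similarity between student features
    # and frozen CLAP teacher features (applied to all regions).
    s = student_feats / np.linalg.norm(student_feats, axis=-1, keepdims=True)
    t = clap_feats / np.linalg.norm(clap_feats, axis=-1, keepdims=True)
    distill = np.mean(1.0 - np.sum(s * t, axis=-1))

    # Supervised branch: cross-entropy on clip-level class labels.
    z = logits - logits.max(axis=-1, keepdims=True)          # stable log-softmax
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    sup = -np.mean(log_probs[np.arange(len(labels)), labels])

    return w_rec * rec + w_distill * distill + w_sup * sup
```

In this sketch, when the student reconstructs the targets perfectly and its features align with the CLAP teacher's, only the classification term contributes, so the gradient pressure shifts entirely to the supervised branch.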
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Audio Classification | ESC-50 | Accuracy | 97.6 | 325 |
| Audio Classification | AudioSet 20K | mAP | 38.6 | 128 |
| Audio Classification | AudioSet 2M | mAP | 48.5 | 79 |
| Audio Classification | SPC V2 | Accuracy | 98.7 | 65 |
| Keyword Spotting | Speech Commands V2 | Accuracy | 98.7 | 61 |
| Audio Event Tagging | AudioSet AS-2M (full) | mAP | 48.5 | 33 |
| Audio Event Tagging | AudioSet (AS-20K) | mAP | 38.6 | 24 |