Contrastive Audio-Visual Masked Autoencoder
About
In this paper, we first extend the recent Masked Auto-Encoder (MAE) model from a single modality to audio-visual multi-modalities. Subsequently, we propose the Contrastive Audio-Visual Masked Auto-Encoder (CAV-MAE) by combining contrastive learning and masked data modeling, two major self-supervised learning frameworks, to learn a joint and coordinated audio-visual representation. Our experiments show that the contrastive audio-visual correspondence learning objective not only enables the model to perform audio-visual retrieval tasks, but also helps the model learn a better joint representation. As a result, our fully self-supervised pretrained CAV-MAE achieves a new SOTA accuracy of 65.9% on VGGSound, and is comparable with the previous best supervised pretrained model on AudioSet in the audio-visual event classification task. Code and pretrained models are at https://github.com/yuangongnd/cav-mae.
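The combined objective described above — a contrastive audio-visual correspondence loss plus a masked reconstruction loss — can be sketched as follows. This is a minimal illustrative implementation, not the paper's code: the symmetric InfoNCE form, the temperature `tau`, and the weighting `lam` are assumptions for the sketch, and the real model operates on transformer patch embeddings rather than raw vectors.

```python
import numpy as np

def cav_mae_loss(audio_emb, visual_emb, pred_patches, true_patches,
                 mask, lam=1.0, tau=0.05):
    """Sketch of a CAV-MAE-style objective: a symmetric InfoNCE
    contrastive loss on paired audio/visual embeddings, plus an MAE-style
    MSE reconstruction loss computed only on masked patches.
    `lam` and `tau` are illustrative hyper-parameters, not the paper's."""
    # L2-normalise embeddings so dot products are cosine similarities.
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    v = visual_emb / np.linalg.norm(visual_emb, axis=1, keepdims=True)
    logits = a @ v.T / tau          # (B, B); matching pairs on the diagonal
    idx = np.arange(len(a))

    def xent(lg):
        # Cross-entropy with the diagonal as the positive class.
        lg = lg - lg.max(axis=1, keepdims=True)          # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    contrastive = 0.5 * (xent(logits) + xent(logits.T))  # audio->video and video->audio
    # As in MAE, reconstruction error is averaged over masked patches only.
    recon = ((pred_patches - true_patches) ** 2)[mask].mean()
    return contrastive + lam * recon
```

In training, the two terms pull in complementary directions: the contrastive term aligns audio and visual embeddings of the same clip (enabling retrieval), while the reconstruction term forces the joint encoder to retain fine-grained information about each modality.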
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Audio Classification | ESC-50 | Accuracy | 83.2 | 325 |
| Audio Classification | AudioSet 20K | mAP | 34.2 | 128 |
| Audio Classification | AudioSet 2M | mAP | 44.9 | 79 |
| Multimodal Sentiment Analysis | MOSI | Accuracy | 59 | 54 |
| Audio Classification | VGG-Sound | Top-1 Accuracy | 59.5 | 50 |
| Video Classification | VGGSound-C unimodal (test) | Accuracy (Gaussian) | 52.78 | 25 |
| Audio-Visual Classification | VGGSound | Top-1 Accuracy | 65.5 | 24 |
| Classification | VGGSound-C (test) | Error Rate (Gaussian) | 37.3 | 24 |
| Classification | AudioSet AS-2M | -- | -- | 21 |
| Multimodal Event Classification | VGGSound-C severity level 5 (test) | Gaussian Corruption Accuracy | 52.9 | 20 |