
Contrastive Audio-Visual Masked Autoencoder

About

In this paper, we first extend the recent Masked Auto-Encoder (MAE) model from a single modality to audio-visual multi-modalities. Subsequently, we propose the Contrastive Audio-Visual Masked Auto-Encoder (CAV-MAE) by combining contrastive learning and masked data modeling, two major self-supervised learning frameworks, to learn a joint and coordinated audio-visual representation. Our experiments show that the contrastive audio-visual correspondence learning objective not only enables the model to perform audio-visual retrieval tasks, but also helps the model learn a better joint representation. As a result, our fully self-supervised pretrained CAV-MAE achieves a new SOTA accuracy of 65.9% on VGGSound, and is comparable with the previous best supervised pretrained model on AudioSet in the audio-visual event classification task. Code and pretrained models are at https://github.com/yuangongnd/cav-mae.
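The two objectives combined in CAV-MAE can be summarized as a symmetric audio-visual contrastive loss (matched audio/video clips are positives, other pairings in the batch are negatives) plus a masked-reconstruction loss on the hidden patches. The toy sketch below illustrates both terms in pure Python; the function names, toy dimensions, and the idea of weighting the two losses are illustrative assumptions, not the paper's implementation (which is in the linked repository).

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u)) or 1.0

def cosine(u, v):
    return dot(u, v) / (norm(u) * norm(v))

def contrastive_loss(audio_embs, video_embs, tau=0.05):
    """Symmetric InfoNCE over a batch: the i-th audio and i-th video
    embedding are a positive pair; every other pairing is a negative."""
    n = len(audio_embs)
    sims = [[cosine(a, v) / tau for v in video_embs] for a in audio_embs]
    loss = 0.0
    for i in range(n):
        # audio -> video direction
        row = [math.exp(s) for s in sims[i]]
        loss -= math.log(row[i] / sum(row))
        # video -> audio direction
        col = [math.exp(sims[j][i]) for j in range(n)]
        loss -= math.log(col[i] / sum(col))
    return loss / (2 * n)

def masked_recon_loss(pred_patches, true_patches, masked_idx):
    """MAE-style objective: mean squared error computed only on the
    patches that were masked out of the encoder input."""
    total, count = 0.0, 0
    for i in masked_idx:
        for p, t in zip(pred_patches[i], true_patches[i]):
            total += (p - t) ** 2
            count += 1
    return total / count

# Total training loss (weighting is an assumption for illustration):
#   loss = contrastive_loss(...) + lam * masked_recon_loss(...)
```

Matched audio/video embeddings drive the contrastive term toward zero, while mismatched batches yield a large loss, which is what makes the same representation usable for retrieval.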

Yuan Gong, Andrew Rouditchenko, Alexander H. Liu, David Harwath, Leonid Karlinsky, Hilde Kuehne, James Glass • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Audio Classification | ESC-50 | Accuracy | 83.2 | 325 |
| Audio Classification | AudioSet 20K | mAP | 34.2 | 128 |
| Audio Classification | AudioSet 2M | mAP | 44.9 | 79 |
| Multimodal Sentiment Analysis | MOSI | Accuracy | 59 | 54 |
| Audio Classification | VGG-Sound | Top-1 Accuracy | 59.5 | 50 |
| Video Classification | VGGSound-C unimodal (test) | Accuracy (Gaussian) | 52.78 | 25 |
| Audio-Visual Classification | VGGSound | Top-1 Accuracy | 65.5 | 24 |
| Classification | VGGSound-C (test) | Error Rate (Gaussian) | 37.3 | 24 |
| Classification | AudioSet AS-2M | -- | -- | 21 |
| Multimodal Event Classification | VGGSound-C severity level 5 (test) | Gaussian Corruption Accuracy | 52.9 | 20 |

Showing 10 of 46 rows.

Other info

Code: https://github.com/yuangongnd/cav-mae