
MAViL: Masked Audio-Video Learners

About

We present Masked Audio-Video Learners (MAViL) to train audio-visual representations. Our approach learns with three complementary forms of self-supervision: (1) reconstruction of masked audio and video input data, (2) intra- and inter-modal contrastive learning with masking, and (3) self-training by reconstructing joint audio-video contextualized features learned from the first two objectives. Pre-training with MAViL not only enables the model to perform well in audio-visual classification and retrieval tasks but also improves representations of each modality in isolation, without using information from the other modality for fine-tuning or inference. Empirically, MAViL sets a new state-of-the-art on AudioSet (53.1 mAP) and VGGSound (67.1% accuracy). For the first time, a self-supervised audio-visual model outperforms ones that use external supervision on these benchmarks.
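The three objectives can be sketched with toy NumPy stand-ins. This is a minimal illustration only, not MAViL's actual implementation: the function names, tensor shapes, masking ratio, and unweighted loss sum are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_recon_loss(pred, target, mask):
    """MSE averaged over masked patches only (mask == 1 marks masked positions)."""
    per_patch = ((pred - target) ** 2).mean(axis=-1)
    return (per_patch * mask).sum() / mask.sum()

def cross_entropy(logits, labels):
    """Numerically stable cross-entropy over rows of a logit matrix."""
    logits = logits - logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def info_nce(a, v, temperature=0.07):
    """Symmetric inter-modal InfoNCE: matching audio/video clips are positives."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    v = v / np.linalg.norm(v, axis=-1, keepdims=True)
    logits = (a @ v.T) / temperature
    labels = np.arange(len(a))
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))

# Toy batch: 4 clips, 16 patches per modality, 8-dim features (illustrative sizes).
B, P, D = 4, 16, 8
audio_target = rng.standard_normal((B, P, D))
video_target = rng.standard_normal((B, P, D))
audio_pred = rng.standard_normal((B, P, D))   # stand-ins for decoder outputs
video_pred = rng.standard_normal((B, P, D))
mask = (rng.random((B, P)) < 0.8).astype(float)  # most patches masked

# (1) masked reconstruction of raw audio/video patches
l_recon = (masked_recon_loss(audio_pred, audio_target, mask)
           + masked_recon_loss(video_pred, video_target, mask))

# (2) inter-modal contrastive loss on pooled clip embeddings
audio_emb = rng.standard_normal((B, D))
video_emb = rng.standard_normal((B, D))
l_contrast = info_nce(audio_emb, video_emb)

# (3) self-training: the student regresses the teacher's contextualized
#     joint features instead of raw patches (same masked-MSE form)
teacher_feats = rng.standard_normal((B, P, D))
l_distill = masked_recon_loss(audio_pred, teacher_feats, mask)

total = l_recon + l_contrast + l_distill
```

In the paper the three terms are produced by a masked audio-video Transformer; here random arrays stand in for its outputs, so only the loss structure (not the learning dynamics) is shown.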

Po-Yao Huang, Vasu Sharma, Hu Xu, Chaitanya Ryali, Haoqi Fan, Yanghao Li, Shang-Wen Li, Gargi Ghosh, Jitendra Malik, Christoph Feichtenhofer • 2022

Related benchmarks

Task                                 Dataset                  Metric             Result   Rank
-----------------------------------  -----------------------  -----------------  -------  ----
Audio Classification                 ESC-50                   Accuracy           94.4     325
Audio Classification                 AudioSet 20K             --                 --       128
Emotion Recognition                  IEMOCAP 4-class (test)   WAR                54.94    46
Audio Retrieval                      AudioCaps                R@1                49.3     42
Audio-Visual Classification          VGGSound                 Top-1 Acc          67.1     24
Classification                       AudioSet AS-2M           --                 --       21
Audio Retrieval                      Clotho                   R@1                23.3     20
Audio-to-Visual Retrieval            MSR-VTT (test)           R@1                23.8     18
Audio-Visual Event Classification    VGGSound (test)          --                 --       18
Audio-Visual Event Classification    AudioSet 2M              mAP (Audio-only)   48.7     16

Showing 10 of 16 rows.
