MAViL: Masked Audio-Video Learners
About
We present Masked Audio-Video Learners (MAViL) to train audio-visual representations. Our approach learns with three complementary forms of self-supervision: (1) reconstruction of masked audio and video input data, (2) intra- and inter-modal contrastive learning with masking, and (3) self-training by reconstructing joint audio-video contextualized features learned from the first two objectives. Pre-training with MAViL not only enables the model to perform well in audio-visual classification and retrieval tasks but also improves the representation of each modality in isolation, without using information from the other modality during fine-tuning or inference. Empirically, MAViL sets a new state-of-the-art on AudioSet (53.1 mAP) and VGGSound (67.1% accuracy). For the first time, a self-supervised audio-visual model outperforms models that use external supervision on these benchmarks.
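The three objectives can be illustrated with a minimal PyTorch sketch. The function names, dictionary keys, and loss weights below (`masked_reconstruction_loss`, `info_nce`, `mavil_style_losses`, `weights=(1.0, 0.01, 1.0)`) are illustrative assumptions, not the released MAViL implementation.

```python
# Illustrative sketch of MAViL's three training objectives (hypothetical names/shapes).
import torch
import torch.nn.functional as F


def masked_reconstruction_loss(pred, target, mask):
    """Objective (1): MAE-style reconstruction, averaged only over masked patches."""
    loss = (pred - target) ** 2          # (B, N, D) per-element squared error
    loss = loss.mean(dim=-1)             # (B, N) per-patch error
    return (loss * mask).sum() / mask.sum().clamp(min=1)


def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE between two batches of clip-level embeddings."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature     # (B, B) similarity matrix
    labels = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))


def mavil_style_losses(audio_out, video_out, joint_targets, weights=(1.0, 0.01, 1.0)):
    """Combine the three objectives. `audio_out`/`video_out` are assumed dicts with
    per-modality reconstructions, targets, masks, pooled embeddings of two masked
    views, and predictions of joint (teacher) features given in `joint_targets`."""
    w_rec, w_con, w_dist = weights
    # (1) reconstruct masked audio spectrogram patches and masked video patches
    rec = (masked_reconstruction_loss(audio_out["pred"], audio_out["target"], audio_out["mask"])
           + masked_reconstruction_loss(video_out["pred"], video_out["target"], video_out["mask"]))
    # (2) inter-modal (audio <-> video) and intra-modal (two masked views) contrast
    con = (info_nce(audio_out["emb"], video_out["emb"])
           + info_nce(audio_out["emb"], audio_out["emb2"])
           + info_nce(video_out["emb"], video_out["emb2"]))
    # (3) self-training: regress joint audio-video contextualized features from a teacher
    dist = (F.smooth_l1_loss(audio_out["joint_pred"], joint_targets["audio"])
            + F.smooth_l1_loss(video_out["joint_pred"], joint_targets["video"]))
    return w_rec * rec + w_con * con + w_dist * dist
```

In practice the joint contextualized targets would come from a momentum or prior-stage teacher trained with objectives (1) and (2); this sketch only shows how the three terms would be combined into one training loss.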
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Audio Classification | ESC-50 | Accuracy | 94.4 | 325 |
| Audio Classification | AudioSet 20K | -- | -- | 128 |
| Emotion Recognition | IEMOCAP 4-class (test) | WAR | 54.94 | 46 |
| Audio Retrieval | AudioCaps | R@1 | 49.3 | 42 |
| Audio-Visual Classification | VGGSound | Top-1 Accuracy | 67.1 | 24 |
| Classification | AudioSet AS-2M | -- | -- | 21 |
| Audio Retrieval | Clotho | R@1 | 23.3 | 20 |
| Audio-to-Visual Retrieval | MSR-VTT (test) | R@1 | 23.8 | 18 |
| Audio-Visual Event Classification | VGGSound (test) | -- | -- | 18 |
| Audio-Visual Event Classification | AudioSet 2M | mAP (audio-only) | 48.7 | 16 |