
EquiAV: Leveraging Equivariance for Audio-Visual Contrastive Learning

About

Recent advancements in self-supervised audio-visual representation learning have demonstrated its potential to capture rich and comprehensive representations. However, despite the benefits of data augmentation demonstrated in many learning methods, audio-visual learning has struggled to fully harness them, as augmentations can easily disrupt the correspondence between input pairs. To address this limitation, we introduce EquiAV, a novel framework that leverages equivariance for audio-visual contrastive learning. Our approach extends equivariance to audio-visual learning through a shared attention-based transformation predictor, which aggregates features from diverse augmentations into a representative embedding, providing robust supervision. Notably, this is achieved with minimal computational overhead. Extensive ablation studies and qualitative results verify the effectiveness of our method. EquiAV outperforms previous works across various audio-visual benchmarks. The code is available at https://github.com/JongSuk1/EquiAV.
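To make the contrastive objective concrete, below is a minimal NumPy sketch of the symmetric audio-visual InfoNCE loss that frameworks like EquiAV build on: matching audio-visual pairs in a batch are positives, all other pairings are negatives. This is an illustrative assumption, not the authors' implementation; the function names, shapes, and temperature value are hypothetical, and EquiAV's equivariant intra-modal terms and transformation predictor are omitted.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Project embeddings onto the unit hypersphere."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def symmetric_infonce(audio_emb, visual_emb, temperature=0.07):
    """Symmetric cross-modal InfoNCE: the i-th audio clip and i-th video
    frame are a positive pair; every other pair in the batch is a negative.
    Shapes: (batch, dim) for both inputs."""
    a = l2_normalize(audio_emb)
    v = l2_normalize(visual_emb)
    logits = a @ v.T / temperature           # (B, B) cosine-similarity matrix
    idx = np.arange(len(a))                  # positives lie on the diagonal

    def cross_entropy(lg):
        # numerically stable log-softmax over each row
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # average the audio-to-video and video-to-audio directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

In the full method, the visual and audio embeddings fed to this loss would come from the augmentation-aggregated representations described in the abstract, so that augmentations enrich training without breaking cross-modal correspondence.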

Jongsuk Kim, Hyeongkeun Lee, Kyeongha Rho, Junmo Kim, Joon Son Chung• 2024

Related benchmarks

Task                               Dataset       Metric            Result  Rank
Action Recognition                 Kinetics-400  Top-1 Acc         57.3    413
Action Recognition                 UCF101        Accuracy          89.7    365
Audio Classification               ESC-50        Accuracy          96      325
Action Recognition                 HMDB51        Accuracy          64.4    78
Audio-Visual Classification        VGGSound      Top-1 Acc         67.1    24
Audio-Visual Event Classification  AudioSet 2M   mAP (Audio-only)  49.1    16
Video Retrieval                    VGGSound      R@1               28.5    15
Audio-to-Video Retrieval           MSR-VTT       Recall@1          14.4    13
Audio-Visual Event Classification  AudioSet 20K  mAP (Audio-only)  42.4    11
Video-to-Audio Retrieval           MSR-VTT       Recall@1          13.8    10
(10 of 14 benchmark rows shown)
