
Distilling Audio-Visual Knowledge by Compositional Contrastive Learning

About

Having access to multi-modal cues (e.g. vision and audio) empowers some cognitive tasks to be done faster compared to learning from a single modality. In this work, we propose to transfer knowledge across heterogeneous modalities, even though these data modalities may not be semantically correlated. Rather than directly aligning the representations of different modalities, we compose audio, image, and video representations across modalities to uncover richer multi-modal knowledge. Our main idea is to learn a compositional embedding that closes the cross-modal semantic gap and captures the task-relevant semantics, which facilitates pulling together representations across modalities by compositional contrastive learning. We establish a new, comprehensive multi-modal distillation benchmark on three video datasets: UCF101, ActivityNet, and VGGSound. Moreover, we demonstrate that our model significantly outperforms a variety of existing knowledge distillation methods in transferring audio-visual knowledge to improve video representation learning. Code is released here: https://github.com/yanbeic/CCL.
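The paper's exact composition function and contrastive objectives are defined in the repository above. As a rough illustration of the general idea, here is a minimal PyTorch sketch: a hypothetical two-layer composition MLP fuses two modality features (e.g. audio and video), and the composed embedding is pulled toward a same-sample target embedding with an InfoNCE-style loss using in-batch negatives. The module name, layer sizes, and the single-target setup are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CompositionalContrastive(nn.Module):
    """Sketch of compositional contrastive learning (assumed design):
    compose two modality embeddings, then contrast the composed embedding
    against same-sample targets with an InfoNCE loss over the batch."""

    def __init__(self, dim_a, dim_v, dim_out, temperature=0.07):
        super().__init__()
        # Hypothetical composition network fusing the two modalities.
        self.compose = nn.Sequential(
            nn.Linear(dim_a + dim_v, dim_out),
            nn.ReLU(inplace=True),
            nn.Linear(dim_out, dim_out),
        )
        self.temperature = temperature

    def forward(self, feat_a, feat_v, target):
        # feat_a: (B, dim_a) features from one modality (e.g. audio teacher)
        # feat_v: (B, dim_v) features from another (e.g. video student)
        # target: (B, dim_out) embeddings the composition is pulled toward
        comp = self.compose(torch.cat([feat_a, feat_v], dim=1))
        comp = F.normalize(comp, dim=1)
        target = F.normalize(target, dim=1)
        # InfoNCE: matching pairs sit on the diagonal; all other
        # samples in the batch act as negatives.
        logits = comp @ target.t() / self.temperature
        labels = torch.arange(comp.size(0), device=comp.device)
        return F.cross_entropy(logits, labels)

# Example usage with random features (batch of 8, illustrative dims):
loss_fn = CompositionalContrastive(dim_a=128, dim_v=512, dim_out=256)
loss = loss_fn(torch.randn(8, 128), torch.randn(8, 512), torch.randn(8, 256))
```

Per the abstract, the compositional embedding is meant to capture task-relevant semantics and pull together representations across modalities; the single fixed target above only illustrates the contrastive mechanics, not the paper's full objective.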

Yanbei Chen, Yongqin Xian, A. Sophia Koepke, Ying Shan, Zeynep Akata • 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
Action Classification | ActivityNet (val) | Top-1 Acc. | 47.3 | 30
Video Recognition | UCF101 (split 1) | Top-1 Acc. | 70 | 27
Video Retrieval | UCF101 | Recall@1 | 0.676 | 27
Video Retrieval | ActivityNet | Recall@1 | 39.5 | 25
Video Retrieval | VGGSound | Recall@1 | 28.1 | 15
Video Recognition | VGGSound | Top-1 Acc. | 23.6 | 4

Other info

Code: https://github.com/yanbeic/CCL
