
XKD: Cross-modal Knowledge Distillation with Domain Alignment for Video Representation Learning

About

We present XKD, a novel self-supervised framework for learning meaningful representations from unlabelled videos. XKD is trained with two pseudo objectives. First, masked data reconstruction is performed to learn modality-specific representations from the audio and visual streams. Next, self-supervised cross-modal knowledge distillation is performed between the two modalities through a teacher-student setup to learn complementary information. We introduce a novel domain alignment strategy to tackle the domain discrepancy between the audio and visual modalities, enabling effective cross-modal knowledge distillation. Additionally, to develop a general-purpose network capable of handling both audio and visual streams, modality-agnostic variants of XKD are introduced, which use the same pretrained backbone for different audio and visual tasks. Our proposed cross-modal knowledge distillation improves video action classification by 8% to 14% on UCF101, HMDB51, and Kinetics400. Additionally, XKD improves multimodal action classification by 5.5% on Kinetics-Sounds. XKD achieves state-of-the-art sound classification on ESC50, with a top-1 accuracy of 96.5%.
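To make the teacher-student step concrete, here is a minimal sketch of a temperature-scaled distillation loss of the kind used in generic cross-modal teacher-student setups: the teacher's softened distribution (e.g. from the audio branch) serves as the target for the student (e.g. the visual branch). This is an illustrative NumPy sketch, not the paper's exact objective; the function names, the temperature value, and the choice of cross-entropy/KL form are assumptions.

```python
import numpy as np

def softmax(x, tau=1.0):
    # Temperature-scaled softmax over the last axis (numerically stabilized).
    z = x / tau
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, tau=2.0):
    """Cross-entropy between the teacher's softened distribution and the
    student's, averaged over the batch. In practice the teacher targets
    would be detached so no gradient flows into the teacher."""
    p_teacher = softmax(teacher_logits, tau)          # soft targets
    log_p_student = np.log(softmax(student_logits, tau))
    return float(-(p_teacher * log_p_student).sum(axis=-1).mean())

# Hypothetical usage: audio branch as teacher, visual branch as student.
teacher = np.array([[1.0, 2.0, 3.0]])
student = np.array([[0.5, 1.5, 3.5]])
loss = distillation_loss(student, teacher)
```

In a symmetric setup, the same loss can be applied in both directions so each modality alternately teaches the other; the domain alignment described above is what keeps the two distributions comparable enough for this transfer to work.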

Pritam Sarkar, Ali Etemad · 2022

Related benchmarks

Task                                 Dataset                        Metric           Result   Rank
Action Recognition                   Kinetics-400                   Top-1 Acc        56.5     413
Action Recognition                   UCF101                         Accuracy         88.4     365
Audio Classification                 ESC-50                         Accuracy         96.5     325
Video Action Recognition             Kinetics 400 (val)             Top-1 Acc        80.1     151
Action Recognition                   HMDB51                         Accuracy         62.2     78
Audio Classification                 ESC50                          Top-1 Acc        96.5     64
Environmental Sound Classification   FSD50K                         mAP              58.5     60
Video Action Recognition             HMDB51 (avg over all splits)   Top-1 Acc        75.7     56
Video Action Recognition             UCF101 (avg over all splits)   Top-1 Accuracy   95.8     42
Action Recognition                   Kinetics-Sounds (test)         Top-1 Accuracy   81.2     11

(Showing 10 of 12 rows.)
