
Self-supervised Co-training for Video Representation Learning

About

The objective of this paper is visual-only self-supervised video representation learning. We make the following contributions: (i) we investigate the benefit of adding semantic-class positives to instance-based Info Noise Contrastive Estimation (InfoNCE) training, showing that this form of supervised contrastive learning leads to a clear improvement in performance; (ii) we propose a novel self-supervised co-training scheme to improve the popular InfoNCE loss, exploiting the complementary information from different views, RGB streams and optical flow, of the same data source by using one view to obtain positive class samples for the other; (iii) we thoroughly evaluate the quality of the learnt representation on two different downstream tasks: action recognition and video retrieval. In both cases, the proposed approach demonstrates state-of-the-art or comparable performance with other self-supervised approaches, whilst being significantly more efficient to train, i.e. requiring far less training data to achieve similar performance.
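The core idea in contributions (i) and (ii) is an InfoNCE loss generalised to multiple positives: instead of a single augmented instance, the positive set also contains same-class samples (or, in the co-training scheme, samples mined via the other view, e.g. optical flow). A minimal NumPy sketch of such a multi-positive InfoNCE term is shown below; this is an illustrative simplification, not the paper's implementation, and the function names are ours.

```python
import numpy as np

def _logsumexp(x):
    # Numerically stable log-sum-exp.
    m = x.max()
    return m + np.log(np.exp(x - m).sum())

def multi_positive_infonce(query, keys, positive_mask, temperature=0.07):
    """Multi-positive InfoNCE (sketch).

    query:         (D,) embedding of the anchor clip.
    keys:          (N, D) embeddings of candidate clips.
    positive_mask: (N,) boolean; True marks positives (e.g. same-class
                   samples, or samples mined from the other view).
    Returns -log( sum_pos exp(q.k/t) / sum_all exp(q.k/t) ).
    """
    # L2-normalise so dot products are cosine similarities.
    q = query / np.linalg.norm(query)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = k @ q / temperature
    log_pos = _logsumexp(logits[positive_mask])
    log_all = _logsumexp(logits)
    return -(log_pos - log_all)
```

With a single positive this reduces to the standard instance-based InfoNCE loss; enlarging the positive set (as the co-training scheme does by mining positives from the complementary view) can only lower the loss for a fixed set of keys, since the positive mass in the numerator grows.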

Tengda Han, Weidi Xie, Andrew Zisserman • 2020

Related benchmarks

Task | Dataset | Metric | Result | Rank
Action Recognition | UCF101 | Accuracy | 87.9 | 365
Action Recognition | UCF101 (mean of 3 splits) | Accuracy | 90.6 | 357
Action Recognition | UCF101 (test) | Accuracy | 87.9 | 307
Action Recognition | HMDB51 (test) | Accuracy | 0.546 | 249
Action Recognition | HMDB51 | Top-1 Acc | 62.9 | 225
Action Recognition | UCF101 (3 splits) | Accuracy | 90.6 | 155
Video Action Recognition | UCF101 | Top-1 Acc | 90.6 | 153
Action Recognition | UCF-101 | Top-1 Acc | 87.9 | 147
Action Classification | HMDB51 (over all three splits) | Accuracy | 62.9 | 121
Video Action Recognition | HMDB-51 (3 splits) | Accuracy | 62.9 | 116
Showing 10 of 37 rows

Other info

Code
