
Self-Supervised Video Representation Learning with Meta-Contrastive Network

About

Self-supervised learning has been successfully applied to pre-train video representations, aiming at efficient adaptation from the pre-training domain to downstream tasks. Existing approaches merely leverage a contrastive loss to learn instance-level discrimination. However, the lack of category information leads to a hard-positive problem that constrains the generalization ability of such methods. We find that the multi-task process of meta-learning can provide a solution to this problem. In this paper, we propose a Meta-Contrastive Network (MCN), which combines contrastive learning and meta-learning to enhance the learning ability of existing self-supervised approaches. Our method contains two training stages based on model-agnostic meta-learning (MAML), each of which consists of a contrastive branch and a meta branch. Extensive evaluations demonstrate the effectiveness of our method. On two downstream tasks, i.e., video action recognition and video retrieval, MCN outperforms state-of-the-art approaches on the UCF101 and HMDB51 datasets. More specifically, with an R(2+1)D backbone, MCN achieves Top-1 accuracies of 84.8% and 54.5% for video action recognition, as well as 52.5% and 23.7% for video retrieval.
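The core idea of combining MAML with a contrastive loss can be sketched as an inner adaptation step on one batch of clips followed by a meta update from a second batch. The sketch below is illustrative only: it uses a tiny linear encoder, finite-difference gradients, and a first-order meta update, whereas MCN itself uses a video backbone such as R(2+1)D and a two-stage training scheme. All function names (`maml_contrastive_step`, `info_nce`, etc.) are ours, not from the paper.

```python
import numpy as np

def encode(W, x):
    """Linear encoder + L2 normalisation (a stand-in for a video backbone)."""
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def info_nce(W, anchors, positives, tau=0.1):
    """InfoNCE contrastive loss: the positive for each anchor is the
    same-index row of `positives`; all other rows act as negatives."""
    za, zp = encode(W, anchors), encode(W, positives)
    logits = za @ zp.T / tau                      # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

def num_grad(f, W, eps=1e-5):
    """Finite-difference gradient, to keep the sketch dependency-free."""
    g = np.zeros_like(W)
    for i in np.ndindex(*W.shape):
        Wp, Wm = W.copy(), W.copy()
        Wp[i] += eps
        Wm[i] -= eps
        g[i] = (f(Wp) - f(Wm)) / (2 * eps)
    return g

def maml_contrastive_step(W, support, query, inner_lr=0.1, outer_lr=0.1):
    """One MAML-style meta step: adapt on the support clips (contrastive
    branch), then update the meta-parameters from the query-clip loss
    at the adapted parameters (meta branch, first-order approximation)."""
    sa, sp = support
    qa, qp = query
    W_fast = W - inner_lr * num_grad(lambda w: info_nce(w, sa, sp), W)
    meta_g = num_grad(lambda w: info_nce(w, qa, qp), W_fast)
    return W - outer_lr * meta_g
```

In a real setting the support/query pairs would be differently augmented clips of the same videos, and the meta update would backpropagate through the inner step rather than use the first-order shortcut shown here.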

Yuanze Lin, Xun Guo, Yan Lu • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Action Recognition | UCF101 (test) | -- | -- | 307 |
| Action Recognition | HMDB51 (test) | -- | -- | 249 |
| Video Action Recognition | UCF101 | Top-1 Acc | 85.4 | 153 |
| Video Retrieval | HMDB51 (test) | Recall@1 | 24.1 | 76 |
| Video Action Recognition | HMDB51 (test) | Accuracy | 59.3 | 73 |
| Video Retrieval | UCF101 (test) | Top-1 Acc | 53.8 | 55 |
| Action Recognition | UCF101 (test) | Accuracy | 89.7 | 50 |
| Video Action Recognition | UCF101 (test) | Top-1 Acc | 89.7 | 46 |
| Action Recognition | HMDB51 (test) | Top-1 Accuracy | 59.3 | 40 |
| Video Action Recognition | HMDB51 | Accuracy | 54.8 | 13 |
