
RSPNet: Relative Speed Perception for Unsupervised Video Representation Learning

About

We study unsupervised video representation learning, which seeks to learn both motion and appearance features from unlabeled video only; these features can then be reused for downstream tasks such as action recognition. This task is extremely challenging due to 1) the highly complex spatio-temporal information in videos and 2) the lack of labeled data for training. Unlike representation learning for static images, it is difficult to construct a self-supervised task that models both motion and appearance features well. Recently, several attempts have been made to learn video representations through video playback speed prediction. However, it is non-trivial to obtain precise speed labels for videos. More critically, the learned models may tend to focus on motion patterns and thus fail to learn appearance features well. In this paper, we observe that relative playback speed is more consistent with motion patterns and thus provides more effective and stable supervision for representation learning. We therefore propose a new way to perceive playback speed, exploiting the relative speed between two video clips as the label. In this way, the model perceives speed well and learns better motion features. Moreover, to ensure that appearance features are learned, we further propose an appearance-focused task in which we enforce the model to perceive the appearance difference between two video clips. We show that jointly optimizing the two tasks consistently improves performance on two downstream tasks, namely action recognition and video retrieval. Remarkably, for action recognition on the UCF101 dataset, we achieve 93.7% accuracy without using labeled data for pre-training, outperforming the ImageNet-supervised pre-trained model. Code and pre-trained models can be found at https://github.com/PeihaoChen/RSPNet.
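To make the relative-speed idea concrete, here is a minimal sketch (not the authors' code; function names and the speed set are illustrative) of how one might sample two clips from the same video at different playback speeds and derive a relative-speed label, which the abstract uses as self-supervision:

```python
# Hedged sketch of relative-speed pair construction for the RSPNet-style
# pretext task. A "clip" is represented here simply as a list of frame
# indices; a real pipeline would decode these frames from the video.
import random

def sample_clip(num_frames, clip_len, speed):
    """Sample `clip_len` frame indices with temporal stride `speed`
    (a larger stride simulates faster playback)."""
    max_start = num_frames - clip_len * speed
    start = random.randint(0, max_start)
    return [start + i * speed for i in range(clip_len)]

def relative_speed_pair(num_frames=300, clip_len=16, speeds=(1, 2, 4)):
    """Sample two clips with independently chosen playback speeds and
    return (clip1, clip2, label), where the relative-speed label is
    0 -> clip1 slower, 1 -> same speed, 2 -> clip1 faster."""
    s1 = random.choice(speeds)
    s2 = random.choice(speeds)
    clip1 = sample_clip(num_frames, clip_len, s1)
    clip2 = sample_clip(num_frames, clip_len, s2)
    label = 1 + (s1 > s2) - (s1 < s2)
    return clip1, clip2, label
```

The label depends only on the *relative* speed of the two clips, not on recovering an absolute speed for either one, which is what the abstract argues makes the supervision more stable.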

Peihao Chen, Deng Huang, Dongliang He, Xiang Long, Runhao Zeng, Shilei Wen, Mingkui Tan, Chuang Gan • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Action Recognition | UCF101 | Accuracy | 93.7 | 365 |
| Action Recognition | UCF101 (mean of 3 splits) | Accuracy | 93.7 | 357 |
| Action Recognition | Something-Something v2 | Top-1 Accuracy | 55 | 341 |
| Action Recognition | UCF101 (test) | Accuracy | 81.1 | 307 |
| Action Recognition | HMDB51 (test) | Accuracy | 64.7 | 249 |
| Action Recognition | HMDB-51 (average of three splits) | Top-1 Acc | 44.6 | 204 |
| Action Recognition | UCF101 (3 splits) | Accuracy | 81.1 | 155 |
| Action Classification | HMDB51 (over all three splits) | Accuracy | 64.7 | 121 |
| Video Action Recognition | HMDB-51 (3 splits) | Accuracy | 44.6 | 116 |
| Video Retrieval | UCF101 (1) | Top-1 Acc | 41.1 | 92 |

Showing 10 of 23 rows.

Other info

Code: https://github.com/PeihaoChen/RSPNet
