
Self-Supervised Spatiotemporal Feature Learning via Video Rotation Prediction

About

The success of deep neural networks generally requires vast amounts of labeled training data, which is expensive and infeasible to obtain at scale, especially for video collections. To alleviate this problem, in this paper we propose 3DRotNet: a fully self-supervised approach to learning spatiotemporal features from unlabeled videos. A set of rotations is applied to all videos, and the pretext task is defined as predicting these rotations. In accomplishing this task, 3DRotNet is effectively trained to understand the semantic concepts and motions in videos. In other words, it learns a spatiotemporal video representation that can be transferred to improve video understanding tasks on small datasets. Our extensive experiments demonstrate the effectiveness of the proposed framework on action recognition, leading to significant improvements over state-of-the-art self-supervised methods. With 3DRotNet self-supervised pre-training on large datasets, recognition accuracy is boosted by 20.4% on UCF101 and 16.7% on HMDB51, compared to models trained from scratch.
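The pretext task described above can be sketched in a few lines: each clip is rotated by 0, 90, 180, and 270 degrees, and the rotation class serves as a free label for training. The snippet below is a minimal illustration of that label-generation step (the function names and the numpy-based setup are our own for illustration; the paper trains a 3D CNN on these rotated clips, which is omitted here).

```python
import numpy as np

ROTATIONS = (0, 90, 180, 270)  # the four rotation classes used as pretext labels

def rotate_clip(clip: np.ndarray, k: int) -> np.ndarray:
    """Rotate every frame of a (T, H, W, C) clip by k * 90 degrees."""
    # np.rot90 rotates in the (H, W) plane; time and channel axes are untouched.
    return np.rot90(clip, k=k, axes=(1, 2))

def make_pretext_batch(clip: np.ndarray):
    """Produce the four rotated copies of a clip and their rotation labels."""
    clips = np.stack([rotate_clip(clip, k) for k in range(len(ROTATIONS))])
    labels = np.arange(len(ROTATIONS))  # class k <-> rotation by k * 90 degrees
    return clips, labels

# Example: a random 16-frame 112x112 RGB clip.
clip = np.random.rand(16, 112, 112, 3).astype(np.float32)
clips, labels = make_pretext_batch(clip)
print(clips.shape)  # (4, 16, 112, 112, 3)
```

A 3D CNN classifier is then trained to predict `labels` from `clips`; no human annotation is needed, since the labels come from the transformation itself.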

Longlong Jing, Xiaodong Yang, Jingen Liu, Yingli Tian • 2018

Related benchmarks

Task | Dataset | Metric | Result | Rank
--- | --- | --- | --- | ---
Action Recognition | UCF101 | Accuracy | 62.9 | 365
Action Recognition | UCF101 (mean of 3 splits) | Accuracy | 76.6 | 357
Action Recognition | UCF101 (test) | Accuracy | 62.9 | 307
Action Recognition | HMDB51 (test) | Accuracy | 0.371 | 249
Action Recognition | HMDB51 | Top-1 Acc | 41.4 | 225
Action Recognition | HMDB-51 (average of three splits) | Top-1 Acc | 47 | 204
Action Recognition | HMDB51 | 3-Fold Accuracy | 33.7 | 191
Action Recognition | UCF101 (3 splits) | Accuracy | 76.7 | 155
Video Action Recognition | UCF101 | Top-1 Acc | 62.9 | 153
Action Recognition | UCF-101 | Top-1 Acc | 77.5 | 147

(Showing 10 of 24 rows)
