
Self-Supervised Visual Learning by Variable Playback Speeds Prediction of a Video

About

We propose a self-supervised visual learning method that predicts the variable playback speeds of a video. Without semantic labels, we learn the spatio-temporal visual representation of the video by leveraging the variations in visual appearance at different playback speeds, under the assumption of temporal coherence. To learn the spatio-temporal visual variations across the entire video, we not only predict a single playback speed but also generate clips of various playback speeds and directions with randomized starting points. Hence, the visual representation can be learned from the meta information (playback speeds and directions) of the video alone. We also propose a new layer-dependable temporal group normalization method for 3D convolutional networks that improves representation learning performance: the temporal features are divided into several groups, and each group is normalized with its own corresponding parameters. We validate the effectiveness of our method by fine-tuning on the action recognition and video retrieval tasks on UCF-101 and HMDB-51.
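The pretext task above hinges on sampling clips at a random playback speed and direction from a random starting point, then using the (speed, direction) pair as the self-supervised label. The sketch below illustrates this idea; the function name, the candidate speed set, and the sampling details are assumptions for illustration, not the paper's exact procedure.

```python
import random

def sample_clip(num_frames, clip_len=16, speeds=(1, 2, 4, 8)):
    """Sample frame indices for a clip at a random playback speed
    and direction, starting from a random point in the video.

    Returns (frame_indices, speed_label, direction_label), where the
    two labels are the targets of the self-supervised prediction task.
    """
    speed_idx = random.randrange(len(speeds))   # which playback speed
    speed = speeds[speed_idx]
    direction = random.randrange(2)             # 0 = forward, 1 = backward
    span = (clip_len - 1) * speed               # frames the clip spans
    start = random.randrange(num_frames - span)  # randomized starting point
    indices = list(range(start, start + span + 1, speed))
    if direction == 1:
        indices = indices[::-1]                 # reversed playback
    return indices, speed_idx, direction
```

A 3D CNN is then trained to recover `speed_idx` and `direction` from the sampled frames, so no semantic annotation is needed.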
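The temporal group normalization described above splits the temporal axis of a 3D feature map into groups and normalizes each group separately. A minimal NumPy sketch of that idea follows; it omits the learnable scale/shift parameters and the per-layer choice of group count (the "layer-dependable" part), so it is illustrative rather than a faithful implementation.

```python
import numpy as np

def temporal_group_norm(x, num_groups=4, eps=1e-5):
    """Normalize temporal groups of a 3D-CNN feature map separately.

    x: array of shape (C, T, H, W). The T axis is split into
    `num_groups` contiguous groups; each group is standardized with
    its own mean and variance computed over (C, T_group, H, W).
    """
    c, t, h, w = x.shape
    assert t % num_groups == 0, "T must be divisible by num_groups"
    g = x.reshape(c, num_groups, t // num_groups, h, w)
    mean = g.mean(axis=(0, 2, 3, 4), keepdims=True)
    var = g.var(axis=(0, 2, 3, 4), keepdims=True)
    g = (g - mean) / np.sqrt(var + eps)
    return g.reshape(c, t, h, w)
```

Because each temporal group gets its own statistics, features at different positions in the clip are not forced to share one normalization, which is the property the method exploits.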

Hyeon Cho, Taehoon Kim, Hyung Jin Chang, Wonjun Hwang • 2020

Related benchmarks

Task                      Dataset                          Metric           Result   Rank
Action Recognition        UCF101 (mean of 3 splits)        Accuracy         70.4     357
Action Recognition        UCF101 (test)                    -                -        307
Action Recognition        HMDB51 (test)                    Accuracy         0.368    249
Video Action Recognition  UCF101                           Top-1 Acc        74.8     153
Action Classification     HMDB51 (over all three splits)   Accuracy         34.3     121
Video Action Recognition  HMDB51                           Top-1 Accuracy   36.8     103
Video Retrieval           UCF101 (1)                       Top-1 Acc        24.6     92
Video Retrieval           UCF101                           Top-1 Acc        24.6     63
Video Retrieval           HMDB51                           Top-1 Accuracy   10.3     39
Action Retrieval          UCF101 (Split 1)                 Rank@1           24.6     10

(10 of 11 rows shown.)
