
Temporally Coherent Embeddings for Self-Supervised Video Representation Learning

About

This paper presents TCE: Temporally Coherent Embeddings for self-supervised video representation learning. The proposed method exploits the inherent structure of unlabeled video data to explicitly enforce temporal coherency in the embedding space, rather than learning it indirectly through ranking or predictive proxy tasks. Just as high-level visual information in the world changes smoothly over time, we believe that the learned representations of nearby frames should exhibit similarly smooth variation. Under this assumption, we train our TCE model to encode videos such that adjacent frames lie close to each other in the embedding space while different videos remain separated. Using TCE we learn robust representations from large quantities of unlabeled video data. We thoroughly analyse and evaluate our self-supervised TCE models on the downstream task of video action recognition across multiple challenging benchmarks (Kinetics400, UCF101, HMDB51). With a simple but effective 2D-CNN backbone and only RGB stream inputs, TCE pre-trained representations outperform all previous self-supervised 2D-CNN and 3D-CNN pre-trained representations on UCF101. The code and pre-trained models for this paper can be downloaded at: https://github.com/csiro-robotics/TCE
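The core idea described above (pull temporally adjacent frames together, push frames from different videos apart) can be sketched as a simple embedding loss. This is an illustrative approximation only, not the paper's exact objective; the function name `tce_style_loss`, the hinge-with-margin separation term, and the `margin` parameter are assumptions introduced here for clarity:

```python
import numpy as np

def normalise(x):
    """L2-normalise each row so embeddings lie on the unit sphere."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def tce_style_loss(frames_a, frames_b, margin=1.0):
    """Illustrative temporal-coherence loss (NOT the paper's exact formulation).

    frames_a, frames_b: (T, D) arrays of L2-normalised frame embeddings
    from two different videos. Adjacent frames within each video are pulled
    together; frames across videos are pushed at least `margin` apart.
    """
    # Coherence term: squared distance between temporally adjacent frames,
    # encouraging embeddings to vary smoothly along the video.
    coherence = 0.0
    for video in (frames_a, frames_b):
        diffs = video[1:] - video[:-1]
        coherence += np.mean(np.sum(diffs ** 2, axis=1))

    # Separation term: hinge penalty on pairwise distances between frames
    # of different videos, keeping the two videos apart in embedding space.
    dists = np.linalg.norm(frames_a[:, None, :] - frames_b[None, :, :], axis=2)
    separation = np.mean(np.maximum(0.0, margin - dists) ** 2)

    return coherence + separation
```

A perfectly "coherent" video (identical adjacent embeddings) that is already far from the other video incurs zero loss, while jittery embeddings or overlapping videos are penalised; this mirrors the intuition in the abstract without claiming to reproduce the authors' implementation.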

Joshua Knights, Ben Harwood, Daniel Ward, Anthony Vanderkop, Olivia Mackenzie-Ross, Peyman Moghadam • 2020

Related benchmarks

Task                   Dataset                          Metric     Result  Rank
Action Recognition     UCF101 (mean of 3 splits)        Accuracy   71.2    357
Action Recognition     UCF101 (test)                    --         --      307
Action Recognition     HMDB51                           Top-1 Acc  36.6    225
Action Classification  HMDB51 (over all three splits)   Accuracy   36.6    121
Action Recognition     UCF101 (1)                       Accuracy   71.2    29
