
Time-Equivariant Contrastive Video Representation Learning

About

We introduce a novel self-supervised contrastive learning method to learn representations from unlabelled videos. Existing approaches ignore the specifics of input distortions, e.g., by learning invariance to temporal transformations. Instead, we argue that video representations should preserve video dynamics and reflect temporal manipulations of the input. Therefore, we exploit novel constraints to build representations that are equivariant to temporal transformations and better capture video dynamics. In our method, relative temporal transformations between augmented clips of a video are encoded in a vector and contrasted with other transformation vectors. To support temporal equivariance learning, we additionally propose the self-supervised classification of two clips of a video into (1) overlapping, (2) ordered, or (3) unordered. Our experiments show that time-equivariant representations achieve state-of-the-art results on video retrieval and action recognition benchmarks on UCF101, HMDB51, and Diving48.
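As an illustration only (the paper's exact definitions may differ), the three-way label for a pair of clips could be derived from their temporal boundaries. The helper below is a hypothetical sketch, assuming each clip is given as a `(start_frame, end_frame)` interval from the same video:

```python
def clip_pair_label(clip_a, clip_b):
    """Classify a pair of clips from the same video into one of three
    self-supervised classes: 'overlapping', 'ordered', or 'unordered'.

    Hypothetical convention: 'ordered' means clip_a ends before clip_b
    starts; 'unordered' means clip_b precedes clip_a instead.
    Each clip is a (start_frame, end_frame) tuple with start < end.
    """
    a_start, a_end = clip_a
    b_start, b_end = clip_b
    # The intervals overlap iff each starts before the other ends.
    if a_start < b_end and b_start < a_end:
        return "overlapping"
    # Disjoint intervals: the label encodes which clip comes first.
    if a_end <= b_start:
        return "ordered"
    return "unordered"
```

In training, such labels would supervise an auxiliary classification head on clip-pair features, alongside the contrastive loss over relative-transformation vectors.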

Simon Jenni, Hailin Jin • 2021

Related benchmarks

Task                        Dataset                  Result                  Rank
Action Recognition          UCF101 (test)            --                      307
Action Recognition          HMDB51 (test)            --                      249
Action Recognition          UCF101 (3 splits)        Accuracy: 88.2          155
Video Action Recognition    HMDB-51 (3 splits)       Accuracy: 63.6          116
Video Retrieval             UCF101 (split 1)         Top-1 Acc: 63.6         92
Video Retrieval             HMDB51 (test)            Recall@1: 36.4          76
Video Retrieval             UCF101 (test)            --                      55
Action Recognition          UCF101 (split 1, test)   Accuracy: 87.1          50
Video Retrieval             HMDB51 (first split)     Top-1 Accuracy: 32.2    49
Action Recognition          HMDB51 (split 1, test)   Top-1 Accuracy: 59.8    40
