
Video Representation Learning with Visual Tempo Consistency

About

Visual tempo, which describes how fast an action goes, has shown its potential in supervised action recognition. In this work, we demonstrate that visual tempo can also serve as a self-supervision signal for video representation learning. We propose to maximize the mutual information between representations of slow and fast videos via hierarchical contrastive learning (VTHCL). Specifically, by sampling the same instance at slow and fast frame rates respectively, we obtain slow and fast video clips that share the same semantics but differ in visual tempo. Video representations learned with VTHCL achieve competitive performance under the self-supervised evaluation protocol for action recognition on UCF-101 (82.1%) and HMDB-51 (49.2%). Moreover, comprehensive experiments suggest that the learned representations generalize well to other downstream tasks, including action detection on AVA and action anticipation on Epic-Kitchen. Finally, we propose the Instance Correspondence Map (ICM) to visualize the shared semantics captured by contrastive learning.
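The core idea above can be sketched in code: sample one video at two frame rates, embed both clips, and maximize their mutual information with an InfoNCE-style contrastive loss (matching slow/fast pairs are positives, other instances in the batch are negatives). This is a minimal illustration, not the authors' implementation; the function names, the subsampling stride, and the temperature value are assumptions for the sketch.

```python
import torch
import torch.nn.functional as F

def sample_slow_fast(video, stride=2):
    """Sample the same instance at two tempos (assumed simple subsampling).
    video: (T, C, H, W) frames. The fast clip keeps all frames; the slow
    clip takes every `stride`-th frame, so both share semantics but
    differ in visual tempo."""
    fast = video
    slow = video[::stride]
    return slow, fast

def info_nce(slow_emb, fast_emb, temperature=0.07):
    """InfoNCE loss between L2-normalized slow/fast clip embeddings.
    Row i of `slow_emb` and row i of `fast_emb` come from the same
    instance (positive pair); all other rows act as negatives."""
    slow = F.normalize(slow_emb, dim=1)          # (N, D)
    fast = F.normalize(fast_emb, dim=1)          # (N, D)
    logits = slow @ fast.t() / temperature       # (N, N) similarity matrix
    targets = torch.arange(slow.size(0))         # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: 4 instances, 128-d embeddings; the fast view is a slightly
# perturbed copy of the slow view, so the loss should be small.
slow_emb = torch.randn(4, 128)
fast_emb = slow_emb + 0.05 * torch.randn(4, 128)
loss = info_nce(slow_emb, fast_emb)
```

In VTHCL this loss is applied hierarchically, i.e. at several stages of the backbone rather than only at the final embedding; the sketch shows a single level.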

Ceyuan Yang, Yinghao Xu, Bo Dai, Bolei Zhou · 2020

Related benchmarks

Task                     | Dataset                        | Result                | Rank
Action Recognition       | UCF101                         | Accuracy: 82.1        | 365
Action Recognition       | UCF101 (mean of 3 splits)      | Accuracy: 80.6        | 357
Action Recognition       | UCF101 (test)                  | --                    | 307
Video Classification     | Kinetics 400 (val)             | --                    | 204
Action Recognition       | UCF101 (3 splits)              | Accuracy: 82.1        | 155
Action Recognition       | UCF-101                        | Top-1 Acc: 82.1       | 147
Action Classification    | HMDB51 (over all three splits) | Accuracy: 48.6        | 121
Video Action Recognition | HMDB-51 (3 splits)             | Accuracy: 49.2        | 116
Video Recognition        | HMDB51                         | Accuracy: 49.2        | 89
Action Recognition       | HMDB51                         | Accuracy: 49.2        | 78

(Showing 10 of 16 rows)
