
Temporal Distinct Representation Learning for Action Recognition

About

Motivated by the previous success of Two-Dimensional Convolutional Neural Networks (2D CNNs) on image recognition, researchers have endeavored to leverage them to characterize videos. However, one limitation of applying 2D CNNs to video analysis is that different frames of a video share the same 2D CNN kernels, which may result in repeated and redundant information extraction, especially in the spatial semantics extraction process, and hence neglect the critical variations among frames. In this paper, we attempt to tackle this issue in two ways. 1) We design a sequential channel filtering mechanism, the Progressive Enhancement Module (PEM), to excite the discriminative channels of features from different frames step by step, and thus avoid repeated information extraction. 2) We create a Temporal Diversity Loss (TD Loss) to force the kernels to concentrate on and capture the variations among frames rather than image regions with similar appearance. Our method is evaluated on the benchmark temporal reasoning datasets Something-Something V1 and V2, where it outperforms the best competitor by 2.4% and 1.3%, respectively. It also improves over 2D-CNN-based state-of-the-art methods on the large-scale Kinetics dataset.
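To make the two ideas above concrete, here is a minimal pure-Python sketch of what a PEM-style sequential channel gate and a TD-Loss-style diversity penalty could look like. This is an illustrative reconstruction, not the paper's exact formulation: the sigmoid gating, the memory update rule, and the adjacent-frame cosine similarity are all simplifying assumptions.

```python
import math

def progressive_enhancement(frame_feats):
    """Hypothetical PEM-style sequential channel filtering.

    frame_feats: list of T per-frame channel descriptors (each a list of C
    floats, e.g. globally pooled feature maps). Channels strongly excited by
    earlier frames are progressively down-weighted for later frames, so each
    frame emphasizes complementary channels instead of repeating extraction.
    """
    T, C = len(frame_feats), len(frame_feats[0])
    memory = [0.0] * C  # accumulated per-channel attention
    enhanced = []
    for t in range(T):
        row = []
        for c in range(C):
            gate = 1.0 / (1.0 + math.exp(-frame_feats[t][c]))  # sigmoid attention
            gate *= 1.0 - memory[c]        # suppress already-used channels
            row.append(frame_feats[t][c] * gate)
            memory[c] = min(1.0, memory[c] + gate / T)  # step-by-step update
        enhanced.append(row)
    return enhanced

def temporal_diversity_loss(frame_feats, eps=1e-8):
    """Hypothetical TD-Loss-style penalty: mean cosine similarity between
    features of adjacent frames. Minimizing it pushes the kernels to encode
    frame-to-frame variation rather than shared static appearance."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv + eps)
    sims = [cos(frame_feats[t], frame_feats[t + 1])
            for t in range(len(frame_feats) - 1)]
    return sum(sims) / len(sims)
```

With this sketch, a clip of identical frames yields a diversity loss near 1 (maximally redundant), while frames with orthogonal features yield a loss near 0, and the sequential gate assigns smaller activations to repeated channels in later frames.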

Junwu Weng, Donghao Luo, Yabiao Wang, Ying Tai, Chengjie Wang, Jilin Li, Feiyue Huang, Xudong Jiang, Junsong Yuan • 2020

Related benchmarks

Task                 | Dataset                       | Result         | Rank
Action Recognition   | Something-Something V2 (test) | Top-1 Acc 65   | 333
Action Recognition   | Something-Something V1 (test) | Top-1 Acc 52   | 189
Action Recognition   | Something-Something V2 (val)  | Top-1 Acc 63.8 | 187
Video Classification | Kinetics-400                  | Top-1 Acc 76.9 | 131
Action Recognition   | Something-Something V1 (val)  | Top-1 Acc 50.9 | 48
