
Deep Temporal Linear Encoding Networks

About

The CNN encoding of features from entire videos for the representation of human actions has rarely been addressed. Instead, CNN work has focused on approaches that fuse spatial and temporal networks, but these were typically limited to processing shorter sequences. We present a new video representation, called temporal linear encoding (TLE), embedded inside CNNs as a new layer, which captures the appearance and motion throughout entire videos. It encodes this aggregated information into a robust video feature representation via end-to-end learning. The advantages of TLE are: (a) it encodes the entire video into a compact feature representation, learning the semantics and a discriminative feature space; (b) it is applicable to all kinds of networks, such as 2D and 3D CNNs, for video classification; and (c) it models feature interactions in a more expressive way and without loss of information. We conduct experiments on two challenging human action datasets, HMDB51 and UCF101. The experiments show that TLE outperforms current state-of-the-art methods on both datasets.
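The abstract describes two steps: temporal aggregation of per-segment CNN features across the whole video, and a bilinear-style encoding that models feature interactions. The NumPy sketch below illustrates one common instantiation of that idea (element-wise aggregation followed by an outer-product encoding with signed square-root and L2 normalization); the function name, feature shapes, and normalization details are illustrative assumptions, not the authors' exact layer.

```python
import numpy as np

def temporal_linear_encoding(segment_features):
    """Aggregate per-segment CNN feature vectors into one compact
    video-level descriptor (illustrative sketch of the TLE idea).

    segment_features: list of (C,) feature vectors, one per video segment.
    """
    # Temporal aggregation: combine segments element-wise into one vector
    aggregated = segment_features[0].copy()
    for f in segment_features[1:]:
        aggregated = aggregated * f
    # Bilinear encoding: the outer product captures pairwise
    # feature interactions without discarding information
    encoded = np.outer(aggregated, aggregated).ravel()
    # Signed square-root and L2 normalization, a common post-processing
    # step for bilinear features (an assumption here, not from the abstract)
    encoded = np.sign(encoded) * np.sqrt(np.abs(encoded))
    encoded = encoded / (np.linalg.norm(encoded) + 1e-12)
    return encoded

# Example: a video split into three segments with 8-dim features each
segs = [np.random.rand(8) for _ in range(3)]
video_descriptor = temporal_linear_encoding(segs)
```

In practice the encoding is placed as a layer between the CNN feature extractor and the classifier, so the whole pipeline trains end-to-end; the full bilinear product is often replaced by a compact approximation to keep the dimensionality manageable.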

Ali Diba, Vivek Sharma, Luc Van Gool • 2016

Related benchmarks

Task                 Dataset                              Result           Rank
Action Recognition   UCF101                               Accuracy 93.8    365
Action Recognition   HMDB-51 (average of three splits)    Top-1 Acc 71.1   204
Action Recognition   UCF101 (3 splits)                    Accuracy 95.6    155
Action Recognition   HMDB51 (split 1)                     --               75
Action Recognition   HMDB-51 v1                           Accuracy 68.8    31
Action Recognition   UCF101 (1)                           --               29
