
FASTER Recurrent Networks for Efficient Video Classification

About

Typical video classification methods divide a video into short clips, run inference on each clip independently, and then aggregate the clip-level predictions to produce the video-level result. However, processing visually similar clips independently ignores the temporal structure of the video sequence and increases the computational cost at inference time. In this paper, we propose a novel framework named FASTER, i.e., Feature Aggregation for Spatio-TEmporal Redundancy. FASTER aims to leverage the redundancy between neighboring clips and reduce the computational cost by learning to aggregate the predictions from models of different complexities. The FASTER framework can integrate high-quality representations from expensive models to capture subtle motion information and lightweight representations from cheap models to cover scene changes in the video. A new recurrent network (i.e., FAST-GRU) is designed to aggregate the mixture of different representations. Compared with existing approaches, FASTER can reduce the FLOPs by over 10× while maintaining state-of-the-art accuracy across popular datasets such as Kinetics, UCF-101, and HMDB-51.

Linchao Zhu, Laura Sevilla-Lara, Du Tran, Matt Feiszli, Yi Yang, Heng Wang · 2019
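The core idea in the abstract can be sketched in a few lines: run an expensive model on a subset of clips, a cheap model on the rest, and fuse the per-clip scores with a GRU-style gated update. The sketch below is illustrative only; the function names, the sampling period, and the scalar gate weights are assumptions, and the real FAST-GRU operates on learned deep features rather than raw score vectors.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(h, x, wz=0.5, wh=1.0):
    # Toy per-class GRU-style update (illustrative weights, not learned):
    # the gate z controls how much new clip evidence enters the state.
    out = []
    for hi, xi in zip(h, x):
        z = sigmoid(wz * (xi - hi))   # update gate
        cand = math.tanh(wh * xi)     # candidate state
        out.append((1 - z) * hi + z * cand)
    return out

def classify_video(clips, expensive, cheap, period=4, num_classes=3):
    # Run the expensive model on every `period`-th clip and the cheap
    # model otherwise, aggregating clip scores recurrently over time.
    h = [0.0] * num_classes
    for t, clip in enumerate(clips):
        scores = expensive(clip) if t % period == 0 else cheap(clip)
        h = gru_step(h, scores)
    return max(range(num_classes), key=lambda c: h[c])

# Dummy models: each "clip" here is already a score vector.
expensive = lambda clip: clip                 # full-fidelity scores
cheap = lambda clip: [0.5 * s for s in clip]  # coarser, cheaper scores

clips = [[0.1, 0.9, 0.0]] * 8                 # 8 clips, class 1 dominant
print(classify_video(clips, expensive, cheap))
```

The design point this mirrors is that the recurrent state, not any single clip, carries the video-level decision, so most clips can be processed cheaply without losing the temporal context accumulated from the expensive ones.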

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Action Recognition | UCF101 | Accuracy 96.9 | 365 |
| Action Recognition | UCF101 (test) | -- | 307 |
| Video Action Recognition | HMDB-51 (3 splits) | Accuracy 75.7 | 116 |
| Video Recognition | HMDB51 | Accuracy 75.7 | 89 |
| Action Recognition | Kinetics | Top-1 Acc 75.3 | 83 |
| Video Recognition | UCF101 | -- | 64 |
| Video Classification | UCF101 (3-split average) | Accuracy 96.9 | 41 |

Other info

Code
