
VideoLSTM Convolves, Attends and Flows for Action Recognition

About

We present a new architecture for end-to-end sequence learning of actions in video, which we call VideoLSTM. Rather than adapting the video to the peculiarities of established recurrent or convolutional architectures, we adapt the architecture to fit the requirements of the video medium. Starting from the soft-Attention LSTM, VideoLSTM makes three novel contributions. First, video has a spatial layout. To exploit the spatial correlation we hardwire convolutions into the soft-Attention LSTM architecture. Second, motion not only informs us about the action content, but also better guides the attention towards the relevant spatio-temporal locations. We introduce motion-based attention. Finally, we demonstrate how the attention from VideoLSTM can be used for action localization by relying on just the action class label. Experiments and comparisons on challenging datasets for action classification and localization support our claims.
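To make the soft spatial attention idea above concrete, here is a minimal NumPy sketch of one attention step: scores over the spatial grid of a frame's convolutional feature map are computed from the current features and the previous (spatial) hidden state, softmax-normalized, and used to re-weight the features before they enter the recurrent cell. All shapes, names, and the vector projections (`w_x`, `w_h`) are illustrative stand-ins, not the paper's actual convolutional attention parameters.

```python
import numpy as np

def softmax2d(scores):
    # Softmax over all spatial locations of a K x K score map.
    e = np.exp(scores - scores.max())
    return e / e.sum()

def attended_input(feature_map, w_x, w_h, hidden):
    """Illustrative soft spatial attention (hypothetical shapes/names).

    feature_map: K x K x D convolutional features of one frame
    w_x, w_h:    D-dim projections standing in for the learned
                 convolutional attention parameters in VideoLSTM
    hidden:      K x K x D previous hidden state (spatial, as in a ConvLSTM)
    """
    # Score each location from current features and previous hidden state.
    scores = feature_map @ w_x + hidden @ w_h   # K x K
    attn = softmax2d(scores)                    # attention map, sums to 1
    # Re-weight features by attention; a ConvLSTM cell would consume this.
    return attn[..., None] * feature_map, attn

K, D = 7, 16
rng = np.random.default_rng(0)
x = rng.standard_normal((K, K, D))
h = rng.standard_normal((K, K, D))
weighted, attn = attended_input(x, rng.standard_normal(D),
                                rng.standard_normal(D), h)
print(attn.shape)  # (7, 7)
```

In the paper's motion-based variant, the attention scores would additionally be conditioned on motion information (e.g. flow features), steering the map towards moving regions; the re-weighting step itself is unchanged.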

Zhenyang Li, Efstratios Gavves, Mihir Jain, Cees G. M. Snoek · 2016

Related benchmarks

Task                 Dataset                             Metric      Result   Rank
Action Recognition   HMDB-51 (average of three splits)   Top-1 Acc   63       204
Action Recognition   UCF101 (3 splits)                   Accuracy    91.5     155
