
Action Anticipation with RBF Kernelized Feature Mapping RNN

About

We introduce a novel Recurrent Neural Network-based algorithm for future video feature generation and action anticipation called feature mapping RNN. Our novel RNN architecture builds upon three effective principles of machine learning, namely parameter sharing, Radial Basis Function (RBF) kernels, and adversarial training. Using only some of the earliest frames of a video, the feature mapping RNN is able to generate future features with a fraction of the parameters needed in a traditional RNN. By feeding these future features into a simple multi-layer perceptron equipped with an RBF kernel layer, we are able to accurately predict the action in the video. In our experiments, we obtain an 18% improvement on the JHMDB-21 dataset, 6% on UCF101-24, and 13% on UT-Interaction over the prior state of the art for action anticipation.
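To make the RBF kernel layer mentioned above concrete, here is a minimal sketch of how such a layer can map an input feature vector onto RBF activations around a set of learned centers. The function name `rbf_layer` and the parameters `centers` and `gamma` are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def rbf_layer(x, centers, gamma=1.0):
    """Illustrative RBF kernel layer (not the paper's code).

    Maps a feature vector x (shape (d,)) onto k RBF activations,
    one per center (centers has shape (k, d)):
        phi_i(x) = exp(-gamma * ||x - c_i||^2)
    """
    # Squared Euclidean distance from x to each center.
    dists = np.sum((centers - x) ** 2, axis=1)
    # RBF response: 1 at the center, decaying with distance.
    return np.exp(-gamma * dists)
```

In a full model, the centers (and possibly `gamma`) would be learned jointly with the rest of the network, and the resulting activations fed into the multi-layer perceptron that predicts the action class.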

Yuge Shi, Basura Fernando, Richard Hartley • 2019

Related benchmarks

Task                            Dataset                 Metric                         Result  Rank
Egocentric Action Anticipation  EPIC-KITCHENS (val)     Top-5 Action Accuracy @ 1.0s   32.7    17
Human Interaction Recognition   UT-Interaction (UT-1)   Accuracy                       97      12
Early Action Recognition        JHMDB                   Accuracy                       73      9
