
Attention is All We Need: Nailing Down Object-centric Attention for Egocentric Activity Recognition

About

In this paper we propose an end-to-end trainable deep neural network model for egocentric activity recognition. Our model is built on the observation that egocentric activities are strongly characterized by the objects present in the video and their locations. Based on this, we develop a spatial attention mechanism that enables the network to attend to regions containing objects correlated with the activity under consideration. We learn highly specialized attention maps for each frame using class-specific activations from a CNN pre-trained for generic image recognition, and use them for spatio-temporal encoding of the video with a convolutional LSTM. Our model is trained in a weakly supervised setting using only raw video-level activity-class labels. Nonetheless, on standard egocentric activity benchmarks it surpasses the currently best-performing method, which relies on strong supervision from hand segmentation and object locations during training, by up to 6 percentage points in recognition accuracy. We visually analyze the attention maps generated by the network, revealing that it successfully identifies the relevant objects present in the video frames, which may explain the strong recognition performance. We also present an extensive ablation analysis of the design choices.
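The per-frame attention the abstract describes, derived from class-specific activations of a pre-trained CNN (in the spirit of Class Activation Maps), can be sketched roughly as below. This is an illustrative NumPy sketch under stated assumptions, not the paper's implementation: the random arrays stand in for CNN feature maps and classifier weights, the function names are invented here, and the paper additionally feeds the attended features into a convolutional LSTM for spatio-temporal encoding.

```python
import numpy as np

def softmax2d(x):
    # Spatial softmax: normalize an H x W score map into a probability map
    e = np.exp(x - x.max())
    return e / e.sum()

def cam_attention(features, fc_weights, class_idx):
    """Use a Class Activation Map as a spatial attention map (sketch).

    features:   (C, H, W) conv feature maps of one frame from a pre-trained CNN
    fc_weights: (num_classes, C) weights of the CNN's final linear classifier
    class_idx:  class whose activation map is used to build the attention
    """
    # CAM: channel-wise weighted sum of the feature maps -> (H, W) score map
    cam = np.tensordot(fc_weights[class_idx], features, axes=1)
    attn = softmax2d(cam)
    # Attend: reweight every spatial location of the features by the map
    attended = features * attn[None, :, :]
    return attn, attended

# Toy example with random stand-ins for real CNN outputs
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 7, 7))   # C=8 channels, 7x7 spatial grid
weights = rng.standard_normal((10, 8))   # 10 generic image classes
attn, out = cam_attention(feats, weights, class_idx=3)
```

In the full model, such attended frame encodings would be produced for every frame and aggregated over time; here the sketch only shows the spatial-attention step for a single frame.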

Swathikiran Sudhakaran, Oswald Lanz • 2018

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Action Recognition | EGTEA Gaze+ | Accuracy | 60.76 | 18 |
| Activity Recognition | EGTEA Gaze+ (Split 2) | Accuracy | 61.5 | 15 |
| Activity Recognition | EGTEA Gaze+ (Split 1) | Accuracy | 62.2 | 15 |
| Activity Recognition | EGTEA Gaze+ (Split 3) | Accuracy | 58.63 | 15 |
| Egocentric Activity Recognition | GTEA 61 | Accuracy | 79 | 14 |
| Egocentric Activity Recognition | GTEA 61 (fixed split) | Accuracy | 77.59 | 13 |
| Egocentric Activity Recognition | GTEA 71 | Accuracy | 77 | 13 |
| Egocentric Activity Recognition | EGTEA | Recognition Accuracy | 0.6076 | 8 |
| Egocentric Activity Recognition | GTEA Gaze+ (leave-one-subject-out cross val) | Accuracy* | 60.13 | 8 |
| Egocentric Action Recognition | EGTEA Average | Accuracy | 60.8 | 6 |

Showing 10 of 16 rows.

Other info

Code
