
Mutual Context Network for Jointly Estimating Egocentric Gaze and Actions

About

In this work, we address the two coupled tasks of gaze prediction and action recognition in egocentric videos by exploring their mutual context. Our assumption is that while performing a manipulation task, what a person is doing determines where the person is looking, and the gaze point in turn reveals gaze and non-gaze regions that contain important and complementary information about the ongoing action. We propose a novel mutual context network (MCN) that jointly learns action-dependent gaze prediction and gaze-guided action recognition in an end-to-end manner. Experiments on public egocentric video datasets demonstrate that our MCN achieves state-of-the-art performance on both gaze prediction and action recognition.
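To make the mutual-context idea concrete, here is a minimal NumPy sketch (not the authors' code) of the coupling described above: a gaze branch conditioned on the current action belief, and an action branch that pools gaze and non-gaze regions separately. The feature sizes, the linear layers, and the alternating update loop are all illustrative assumptions; the actual MCN uses deep convolutional networks trained end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W, C = 8, 8, 16      # assumed spatial feature-map size and channel count
n_actions = 10          # assumed number of action classes

feat = rng.standard_normal((H, W, C))   # stand-in for video features of one clip

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# --- gaze branch: action-dependent gaze prediction (illustrative linear layer) ---
W_gaze = rng.standard_normal((C + n_actions, 1)) * 0.1

def predict_gaze(feat, action_logits):
    """Predict a gaze heatmap conditioned on the current action belief."""
    a = np.tile(softmax(action_logits), (H, W, 1))   # broadcast action context
    x = np.concatenate([feat, a], axis=-1)           # (H, W, C + n_actions)
    logits = x @ W_gaze                              # (H, W, 1)
    return softmax(logits.reshape(-1)).reshape(H, W) # spatial distribution

# --- action branch: gaze-guided action recognition (illustrative linear layer) ---
W_act = rng.standard_normal((2 * C, n_actions)) * 0.1

def recognize_action(feat, gaze_heat):
    """Pool gaze and non-gaze regions separately, then classify."""
    fg = (feat * gaze_heat[..., None]).sum(axis=(0, 1))           # gaze-weighted pool
    bg = (feat * (1.0 - gaze_heat)[..., None]).mean(axis=(0, 1))  # non-gaze context
    return np.concatenate([fg, bg]) @ W_act

# Alternate the two branches so each task provides context for the other.
action_logits = np.zeros(n_actions)
for _ in range(3):
    gaze_heat = predict_gaze(feat, action_logits)
    action_logits = recognize_action(feat, gaze_heat)
```

The loop captures the paper's mutual dependence at a sketch level: the gaze map is a probability distribution over locations shaped by the action estimate, and the action estimate in turn uses that map to weight gaze versus non-gaze evidence.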

Yifei Huang, Zhenqiang Li, Minjie Cai, Yoichi Sato • 2019

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Egocentric Activity Recognition | GTEA Gaze+ (leave-one-subject-out cross-validation) | - | 8 |
| Egocentric Activity Recognition | EGTEA (1) | Accuracy: 55.63 | 5 |
