
Forecasting Human-Object Interaction: Joint Prediction of Motor Attention and Actions in First Person Video

About

We address the challenging task of anticipating human-object interaction in first person videos. Most existing methods ignore how the camera wearer interacts with objects, or simply treat body motion as a separate modality. In contrast, we observe that intentional hand movement reveals critical information about the future activity. Motivated by this, we adopt intentional hand movement as a representation of the future and propose a novel deep network that jointly models and predicts egocentric hand motion, interaction hotspots, and the future action. Specifically, we treat future hand motion as motor attention, and model this attention using latent variables in our deep model. The predicted motor attention is further used to select discriminative spatial-temporal visual features for predicting actions and interaction hotspots. We present extensive experiments demonstrating the benefit of the proposed joint model. Importantly, our model produces new state-of-the-art results for action anticipation on both the EGTEA Gaze+ and EPIC-Kitchens datasets. Our project page is available at https://aptx4869lm.github.io/ForecastingHOI/
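The core idea of using a predicted attention map to weight spatial-temporal features can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names, feature shapes, and the simple softmax-weighted pooling are all assumptions made for clarity.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a flat array of logits.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(features, attn_logits):
    """Weight spatial features by a predicted motor-attention map,
    then pool them into one vector for downstream prediction heads.

    features:    (H*W, C) spatial visual features
    attn_logits: (H*W,)   unnormalised motor-attention scores
    """
    attn = softmax(attn_logits)   # normalise into a spatial distribution
    return attn @ features        # (C,) attention-weighted feature vector

rng = np.random.default_rng(0)
feats = rng.normal(size=(49, 8))  # hypothetical 7x7 grid of 8-dim features
logits = rng.normal(size=49)      # hypothetical predicted attention logits
pooled = attention_pool(feats, logits)
print(pooled.shape)  # (8,)
```

In the paper's model the attention logits come from latent variables rather than being given directly, and the pooled features feed both the action and the interaction-hotspot heads; the sketch only shows the attention-weighting step itself.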

Miao Liu, Siyu Tang, Yin Li, James Rehg • 2019

Related benchmarks

Task | Dataset | Result | Rank
--- | --- | --- | ---
Action Anticipation | EPIC-KITCHENS unseen S2 (test) | Top-1 Acc (Verb): 29.9 | 47
Action Anticipation | EPIC-Kitchens-55 (val) | -- | 33
Action Anticipation | EPIC-KITCHENS seen S1 (test) | Top-1 Acc (Verb): 36.3 | 27
Egocentric Action Anticipation | EPIC-Kitchens-55 S1 - Seen (test) | Top-1 Acc (Verb): 36.25 | 24
Action Anticipation | EGTEA Gaze+ | Top-1 Acc (Verb): 49 | 21
Action Anticipation | EGTEA Gaze+ (Split 1) | Top-1 Acc (Verb): 49 | 9
