
Joint Hand Motion and Interaction Hotspots Prediction from Egocentric Videos

About

We propose to forecast future hand-object interactions given an egocentric video. Instead of predicting action labels or pixels, we directly predict the hand motion trajectory and the future contact points on the next active object (i.e., interaction hotspots). This relatively low-dimensional representation provides a concrete description of future interactions. To tackle this task, we first provide an automatic way to collect trajectory and hotspot labels on large-scale data. We then use this data to train an Object-Centric Transformer (OCT) model for prediction. Our model performs hand and object interaction reasoning via the self-attention mechanism in Transformers. OCT also provides a probabilistic framework to sample the future trajectory and hotspots to handle uncertainty in prediction. We perform experiments on the Epic-Kitchens-55, Epic-Kitchens-100, and EGTEA Gaze+ datasets, and show that OCT outperforms state-of-the-art approaches by a large margin. Project page: https://stevenlsw.github.io/hoi-forecast .
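As a rough illustration of the two ingredients named above (self-attention over hand/object tokens, plus probabilistic sampling of future trajectories), the sketch below is a minimal NumPy mock-up. It is not the authors' implementation: all dimensions, weight shapes, and the fixed-variance Gaussian head are hypothetical choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    """Single-head self-attention mixing hand and object tokens."""
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores, axis=-1) @ V

d = 16
# Hypothetical token set: one hand token followed by two object tokens.
tokens = rng.normal(size=(3, d))
Wq, Wk, Wv = (0.1 * rng.normal(size=(d, d)) for _ in range(3))
fused = self_attention(tokens, Wq, Wk, Wv)

# Probabilistic head: predict per-step 2-D hand displacement means from the
# fused hand token, then sample K candidate futures to reflect uncertainty.
T, K_samples = 4, 5          # future steps, number of sampled trajectories
Wm = 0.1 * rng.normal(size=(d, T * 2))
mu = (fused[0] @ Wm).reshape(T, 2)         # displacement means per step
sigma = 0.05                               # fixed std, an assumption here
samples = mu + sigma * rng.normal(size=(K_samples, T, 2))
trajectories = samples.cumsum(axis=1)      # integrate displacements to positions
print(trajectories.shape)                  # (5, 4, 2)
```

Sampling several trajectories rather than regressing a single one is what lets a model of this kind express multimodal futures (e.g., the hand could reach for either of two objects).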

Shaowei Liu, Subarna Tripathi, Somdeb Majumdar, Xiaolong Wang• 2022

Related benchmarks

Task                  | Dataset              | Success Rate | Rank
Imitation Learning    | Knife (unseen)       | 0.1          | 10
Imitation Learning    | Shelf (unseen)       | 0.6          | 10
Imitation Learning    | Door (unseen)        | 40           | 10
Imitation Learning    | Veg (unseen)         | 30           | 10
Imitation Learning    | Drawer (unseen)      | 60           | 10
Imitation Learning    | Lid (unseen)         | 0.0          | 10
Imitation Learning    | Cabinet (unseen)     | 30           | 10
Imitation Learning    | Pot (unseen)         | 0.1          | 10
Opening the microwave | Franka Kitchen D4RL  | 0.45         | 5
Turning the light on  | Franka Kitchen D4RL  | 0.6          | 5

(10 of 11 rows shown)
