Human Hands as Probes for Interactive Object Understanding

About

Interactive object understanding, that is, what we can do to objects and how, is a long-standing goal of computer vision. In this paper, we tackle this problem by observing human hands in in-the-wild egocentric videos. We demonstrate that observing what human hands interact with, and how, provides both the relevant data and the necessary supervision. Attending to hands readily localizes and stabilizes active objects for learning, and reveals places where interactions with objects occur. Analyzing the hands shows what we can do to objects and how. We apply these basic principles on the EPIC-KITCHENS dataset and successfully learn state-sensitive features and object affordances (regions of interaction and afforded grasps), purely by observing hands in egocentric videos.

Mohit Goyal, Sahil Modi, Rishabh Goyal, Saurabh Gupta • 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
Object State Classification | EPIC-STATES Novel Objects (test) | mAP | 81.8 | 16
Object State Classification | EPIC-STATES All Objects (test) | mAP | 84.8 | 16
Region-of-Interaction Prediction | EPIC-ROI Overall 1.0 (test) | AP | 76.4 | 16
Region-of-Interaction Prediction | EPIC-ROI COCO Objects 1.0 (test) | AP | 83.0 | 16
Region-of-Interaction Prediction | EPIC-ROI COCO Parts 1.0 (test) | AP | 43.7 | 16
Region-of-Interaction Prediction | EPIC-ROI Non-COCO Parts 1.0 (test) | AP | 11.4 | 16
Region-of-Interaction Prediction | EPIC-ROI Non-COCO Objects 1.0 (test) | AP | 44.7 | 16
Imitation Learning | Veg (unseen) | Success Rate | 40 | 10
Imitation Learning | Cabinet (unseen) | Success Rate | 50 | 10
Imitation Learning | Knife (unseen) | Success Rate | 0 | 10

(Showing 10 of 18 rows.)
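The classification and region-prediction rows above report mAP and AP. As a minimal sketch of how these metrics are typically computed (this is not the paper's evaluation code, and the class names and scores below are invented for illustration), per-class average precision sums the precision at each true-positive rank weighted by the recall step, and mAP averages it over classes:

```python
# Sketch of the mAP metric reported in the results table (illustrative only;
# not the authors' evaluation code). AP sums precision at each true-positive
# rank, weighted by the recall increment; mAP averages AP over classes.

def average_precision(labels, scores):
    """AP for one class: labels are 0/1 ground truth, scores are confidences."""
    ranked = sorted(zip(scores, labels), key=lambda pair: -pair[0])
    num_pos = sum(labels)
    hits, ap = 0, 0.0
    for rank, (_, label) in enumerate(ranked, start=1):
        if label:
            hits += 1
            ap += (hits / rank) / num_pos  # precision at this recall step
    return ap

def mean_average_precision(per_class):
    """mAP: mean of per-class AP; per_class maps class name -> (labels, scores)."""
    return sum(average_precision(l, s) for l, s in per_class.values()) / len(per_class)

# Toy example with two hypothetical object-state classes ("open", "cut").
data = {
    "open": ([1, 0, 1, 1], [0.9, 0.8, 0.7, 0.6]),
    "cut":  ([0, 1, 1, 0], [0.9, 0.8, 0.3, 0.2]),
}
print(round(mean_average_precision(data), 4))  # → 0.6944
```

The same AP computation applies to the Region-of-Interaction rows, with pixels or regions standing in for the classification examples.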

Other info

Code