
Grounded Human-Object Interaction Hotspots from Video

About

Learning how to interact with objects is an important step towards embodied visual intelligence, but existing techniques suffer from heavy supervision or sensing requirements. We propose an approach to learn human-object interaction "hotspots" directly from video. Rather than treat affordances as a manually supervised semantic segmentation task, our approach learns about interactions by watching videos of real human behavior and anticipating afforded actions. Given a novel image or video, our model infers a spatial hotspot map indicating how an object would be manipulated in a potential interaction, even if the object is currently at rest. Through results with both first- and third-person video, we show the value of grounding affordances in real human-object interactions. Not only are our weakly supervised hotspots competitive with strongly supervised affordance methods, but they can also anticipate object interaction for novel object categories.
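To make the hotspot readout concrete, here is a minimal, hypothetical PyTorch sketch: it derives a spatial map for one anticipated action via a Grad-CAM-style gradient-weighted activation map over an action-classification backbone. The ResNet-18 encoder, the 20-way action head, and the layer tapped for activations are stand-in assumptions for illustration, not the authors' exact architecture or training objective (which is learned from video with an anticipation loss).

```python
# Minimal sketch: gradient-weighted activation map for one anticipated
# action, in the spirit of weakly supervised hotspot grounding.
# All architectural choices below are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

net = models.resnet18(weights=None)               # stand-in visual encoder
net.fc = torch.nn.Linear(net.fc.in_features, 20)  # 20 hypothetical actions
net.eval()

def hotspot_map(image, action_idx):
    """Return a normalized heatmap over the image for one action class."""
    feats = {}
    # Stash the last conv block's activations during the forward pass.
    handle = net.layer4.register_forward_hook(
        lambda mod, inp, out: feats.update(act=out))
    logits = net(image)                           # image: (1, 3, 224, 224)
    handle.remove()
    act = feats["act"]                            # (1, 512, 7, 7)
    # Gradient of the chosen action score w.r.t. the activations.
    grad, = torch.autograd.grad(logits[0, action_idx], act)
    weights = grad.mean(dim=(2, 3), keepdim=True) # per-channel importance
    cam = F.relu((weights * act).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:],
                        mode="bilinear", align_corners=False)
    return cam / cam.max().clamp(min=1e-8)        # normalize to [0, 1]

heat = hotspot_map(torch.randn(1, 3, 224, 224), action_idx=3)
```

In the paper the map is conditioned on an *anticipated* action rather than an observed one, so a trained model can highlight hotspots on objects at rest; the sketch above only shows the activation-mapping step.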

Tushar Nagarajan, Christoph Feichtenhofer, Kristen Grauman • 2018

Related benchmarks

Task | Dataset | Result | Rank
Grasping | Epic-Kitchens (Held-out Rare Objects) | Success Rate: 20 | 20
Region-of-Interaction Prediction | EPIC-ROI Non-COCO Parts 1.0 (test) | AP: 0.096 | 16
Region-of-Interaction Prediction | EPIC-ROI COCO Parts 1.0 (test) | AP: 16.7 | 16
Region-of-Interaction Prediction | EPIC-ROI Non-COCO Objects 1.0 (test) | AP: 29.5 | 16
Region-of-Interaction Prediction | EPIC-ROI Overall 1.0 (test) | AP: 52 | 16
Region-of-Interaction Prediction | EPIC-ROI COCO Objects 1.0 (test) | AP: 33.9 | 16
Affordance Grounding | OPRA 28 x 28 (test) | KLD: 1.42 | 11
Imitation Learning | Pot (unseen) | Success Rate: 0.8 | 10
Imitation Learning | Lid (unseen) | Success Rate: 0.3 | 10
Affordance Grounding | EPIC-Hotspots 28 x 28 (test) | KLD: 1.26 | 10

Showing 10 of 29 rows.
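For reference, the KLD values on the two 28 x 28 affordance-grounding rows measure the KL divergence between the normalized ground-truth and predicted heatmaps (lower is better). Below is a minimal sketch of this metric in the common saliency-benchmark form KL(gt || pred); the benchmark's exact evaluation code and the direction of the divergence are assumptions.

```python
# Sketch of a KLD heatmap metric, assuming the standard
# saliency-benchmark form KL(gt || pred); the benchmark's
# actual evaluation script may differ in detail.
import numpy as np

def kld(pred, gt, eps=1e-12):
    """KL divergence between two heatmaps (e.g. 28 x 28); lower is better."""
    p = pred / (pred.sum() + eps)   # normalize prediction to a distribution
    q = gt / (gt.sum() + eps)       # normalize ground truth likewise
    return float(np.sum(q * np.log(eps + q / (p + eps))))

# Example on random heatmaps:
print(kld(np.random.rand(28, 28), np.random.rand(28, 28)))
```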

Other info

Code
