
Opening the Vocabulary of Egocentric Actions

About

Human actions in egocentric videos are often hand-object interactions composed from a verb (performed by the hand) applied to an object. Despite extensive scaling up, egocentric datasets still face two limitations: sparsity of action compositions and a closed set of interacting objects. This paper proposes a novel open vocabulary action recognition task. Given a set of verbs and objects observed during training, the goal is to generalize the verbs to an open vocabulary of actions with seen and novel objects. To this end, we decouple the verb and object predictions via an object-agnostic verb encoder and a prompt-based object encoder. The prompting leverages CLIP representations to predict an open vocabulary of interacting objects. We create open vocabulary benchmarks on the EPIC-KITCHENS-100 and Assembly101 datasets; whereas closed-action methods fail to generalize, our proposed method is effective. In addition, our object encoder significantly outperforms existing open-vocabulary visual recognition methods in recognizing novel interacting objects.
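The decoupled design described above can be sketched in a few lines: an object-agnostic verb classifier over the closed verb set, plus open-vocabulary object scoring by cosine similarity against CLIP-style text prompt embeddings. The snippet below is a minimal, hypothetical illustration; the random vectors stand in for real CLIP features, and all names and dimensions are assumptions rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 512  # assumed embedding width (CLIP ViT-B/32 uses 512)

# Closed verb vocabulary seen at training time.
verbs = ["take", "put", "cut", "open"]
# Open object vocabulary: base (seen) and novel (unseen) objects together.
objects = ["knife", "plate", "spanner", "avocado"]

# Stand-ins for learned components:
video_feat = rng.normal(size=DIM)                    # clip-level video feature
verb_head = rng.normal(size=(len(verbs), DIM))       # object-agnostic verb classifier
prompt_embs = rng.normal(size=(len(objects), DIM))   # CLIP text embeddings of prompts
# such as "a photo of a hand interacting with a {object}" (hypothetical template)

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Decoupled predictions: verb from the closed head, object from prompt similarity.
verb_scores = verb_head @ video_feat
obj_scores = np.array([cosine(video_feat, p) for p in prompt_embs])

verb = verbs[int(np.argmax(verb_scores))]
obj = objects[int(np.argmax(obj_scores))]
action = f"{verb} {obj}"  # composed open-vocabulary action
```

Because the object branch only compares embeddings, novel objects can be added at test time by embedding new prompts, with no retraining of the verb head.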

Dibyadip Chatterjee, Fadime Sener, Shugao Ma, Angela Yao • 2023

Related benchmarks

Task | Dataset | Result | Rank
Compositional Action Recognition | Something-Else (Compositional) | Top-1 Accuracy: 61.8 | 8
Open-Vocabulary Object Recognition | EPIC100-OV | Top-1 Accuracy (Base): 47.8 | 8
Action Recognition | EPIC100-OV (Closed) | Verb Top-1 Acc: 64.1 | 3
Action Recognition | EPIC100-OV (Novel) | Verb Top-1 Acc: 41.4 | 3
Action Recognition | EPIC100-OV (HM) | Verb Top-1 Acc: 50.3 | 3
Action Recognition | Assembly101-OV (Closed) | Verb Top-1 Acc: 57.6 | 3
Action Recognition | Assembly101-OV (Novel) | Verb Top-1 Acc: 45.1 | 3
Action Recognition | Assembly101-OV (HM) | Verb Top-1 Acc: 50.5 | 3
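The "(HM)" rows report the harmonic mean of the closed- and novel-split verb accuracies, the usual single-number summary for open-vocabulary benchmarks. For example, the EPIC100-OV figure follows directly from its Closed and Novel verb accuracies (64.1 and 41.4 in percent):

```python
def harmonic_mean(closed, novel):
    """Harmonic mean of closed- and novel-split accuracies (in %)."""
    return 2 * closed * novel / (closed + novel)

print(round(harmonic_mean(64.1, 41.4), 1))  # EPIC100-OV verb accuracy -> 50.3
```

The harmonic mean penalizes a large gap between the two splits, so a method cannot score well by excelling on seen objects alone.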

Other info

Code
