
Learning State-Aware Visual Representations from Audible Interactions

About

We propose a self-supervised algorithm to learn representations from egocentric video data. Recently, significant efforts have been made to capture humans interacting with their own environments as they go about their daily activities. As a result, several large egocentric datasets of interaction-rich multi-modal data have emerged. However, learning representations from these videos can be challenging. First, given the uncurated nature of long-form continuous videos, learning effective representations requires focusing on the moments in time when interactions take place. Second, visual representations of daily activities should be sensitive to changes in the state of the environment, yet current successful multi-modal learning frameworks encourage representation invariance over time. To address these challenges, we leverage audio signals to identify moments of likely interactions, which are conducive to better learning. We also propose a novel self-supervised objective that learns from audible state changes caused by interactions. We validate these contributions extensively on two large-scale egocentric datasets, EPIC-Kitchens-100 and the recently released Ego4D, and show improvements on several downstream tasks, including action recognition, long-term action anticipation, and object state change classification.
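To make the first idea concrete, here is a minimal, illustrative sketch of using audio to flag moments of likely interaction. This is not the paper's learned method; it is a hand-rolled stand-in that ranks fixed-length windows of a waveform by short-time energy, since interaction sounds (clanks, pours, cuts) tend to produce energy bursts against background silence. The function name, window length, and top-k selection are all assumptions made for the example.

```python
import numpy as np

def interaction_moments(audio, sr, win_s=0.5, top_k=3):
    """Rank non-overlapping windows of a mono waveform by short-time
    audio energy and return the top-k window start times in seconds,
    as candidate moments of interaction.

    Illustrative heuristic only: the paper learns from audio rather
    than thresholding raw energy. `audio` is a 1-D float array, `sr`
    its sample rate in Hz.
    """
    win = int(win_s * sr)                      # samples per window
    n = len(audio) // win                      # number of full windows
    energies = np.array(
        [np.sum(audio[i * win:(i + 1) * win] ** 2) for i in range(n)]
    )
    top = np.argsort(energies)[::-1][:top_k]   # highest-energy windows
    return sorted((top * win_s).tolist())      # start times, ascending

# Toy example: 4 s of silence with two half-second bursts.
sr = 16000
audio = np.zeros(sr * 4)
audio[sr * 1:sr * 1 + sr // 2] = 1.0   # burst starting at t = 1.0 s
audio[sr * 3:sr * 3 + sr // 2] = 1.0   # burst starting at t = 3.0 s
print(interaction_moments(audio, sr, top_k=2))  # → [1.0, 3.0]
```

In the paper's setting the audio would come from the egocentric video's soundtrack, and the selected moments would be used to sample training clips rather than treating all timestamps uniformly.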

Himangi Mittal, Pedro Morgado, Unnat Jain, Abhinav Gupta • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Action Recognition | EPIC-KITCHENS-100 (test) | Top-1 Verb Acc | 31.71 | 101 |
| Long-term Action Anticipation | Ego4D v1 (test) | ED@Z=20 Verb | 0.755 | 31 |
| State Change Classification | Ego4D v1 (test) | Accuracy | 66.3 | 29 |
| Action Recognition | Ego4D v1 (test) | Top-1 Accuracy (Verb) | 23.1 | 23 |
| Point-of-no-return Temporal Localization | Ego4D v1 (test) | Error | 0.772 | 21 |

Other info

Code
