Look at What I'm Doing: Self-Supervised Spatial Grounding of Narrations in Instructional Videos

About

We introduce the task of spatially localizing narrated interactions in videos. Key to our approach is the ability to learn to spatially localize interactions with self-supervision on a large corpus of videos with accompanying transcribed narrations. To achieve this goal, we propose a multilayer cross-modal attention network that enables effective optimization of a contrastive loss during training. We introduce a divided strategy that alternates between computing inter- and intra-modal attention across the visual and natural language modalities, which allows effective training via directly contrasting the two modalities' representations. We demonstrate the effectiveness of our approach by self-training on the HowTo100M instructional video dataset and evaluating on a newly collected dataset of localized described interactions in the YouCook2 dataset. We show that our approach outperforms alternative baselines, including shallow co-attention and full cross-modal attention. We also apply our approach to grounding phrases in images with weak supervision on Flickr30K and show that stacking multiple attention layers is effective and, when combined with a word-to-region loss, achieves state of the art on recall-at-one and pointing hand accuracies.
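To make the described architecture more concrete, below is a minimal sketch (not the authors' released code) of one "divided" attention layer that alternates intra-modal self-attention with inter-modal cross-attention between region/frame features and word features, followed by a symmetric InfoNCE-style contrastive loss over pooled video and narration representations. All module and variable names (DividedAttentionLayer, d_model, etc.) are illustrative assumptions rather than identifiers from the paper.

```python
# Sketch of a divided inter-/intra-modal attention layer with a contrastive loss.
# Names and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DividedAttentionLayer(nn.Module):
    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        # Intra-modal self-attention, one block per modality.
        self.self_attn_vis = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.self_attn_txt = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Inter-modal cross-attention: each modality attends to the other.
        self.cross_attn_vis = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn_txt = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm_vis = nn.LayerNorm(d_model)
        self.norm_txt = nn.LayerNorm(d_model)

    def forward(self, vis, txt):
        # vis: (B, R, D) region/frame features; txt: (B, W, D) word features.
        vis = vis + self.self_attn_vis(vis, vis, vis)[0]   # intra-modal step
        txt = txt + self.self_attn_txt(txt, txt, txt)[0]
        vis = vis + self.cross_attn_vis(vis, txt, txt)[0]  # inter-modal step
        txt = txt + self.cross_attn_txt(txt, vis, vis)[0]
        return self.norm_vis(vis), self.norm_txt(txt)


def contrastive_loss(vis, txt, temperature: float = 0.07):
    # Pool each modality, then contrast matching video/narration pairs
    # against the other pairs in the batch (symmetric InfoNCE).
    v = F.normalize(vis.mean(dim=1), dim=-1)  # (B, D)
    t = F.normalize(txt.mean(dim=1), dim=-1)  # (B, D)
    logits = v @ t.t() / temperature          # (B, B) similarity matrix
    labels = torch.arange(v.size(0), device=v.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))


if __name__ == "__main__":
    # Stack multiple layers, as the abstract suggests is beneficial.
    layers = nn.ModuleList([DividedAttentionLayer() for _ in range(2)])
    vis, txt = torch.randn(4, 20, 512), torch.randn(4, 12, 512)
    for layer in layers:
        vis, txt = layer(vis, txt)
    print(contrastive_loss(vis, txt).item())
```

The key design point the sketch tries to mirror is the separation of attention into alternating intra- and inter-modal steps, which keeps each modality's representation directly contrastable against the other during training.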

Reuben Tan, Bryan A. Plummer, Kate Saenko, Hailin Jin, Bryan Russell • 2021

Related benchmarks

Task                      | Dataset                           | Metric                 | Result | Rank
Action Grounding          | YouCook-Interactions (val)        | Accuracy               | 52.65  | 13
Action Grounding          | Daly (test)                       | Accuracy               | 61.06  | 13
Action Grounding          | V-HICO (test)                     | Accuracy               | 55.2   | 13
Action Grounding          | Grounding YouTube (test)          | Accuracy               | 47.56  | 11
Interaction Localization  | YouCook2 Interactions 1.0 (test)  | Localization Accuracy  | 55.8   | 8
Interaction Localization  | YouCook2 Interactions (val)       | Localization Accuracy  | 55.8   | 4
Object Localization       | YouCook2-BB                       | Full Localization      | 0.5925 | 2

Other info

Code
