
Multilevel Language and Vision Integration for Text-to-Clip Retrieval

About

We address the problem of text-based activity retrieval in video. Given a sentence describing an activity, our task is to retrieve matching clips from an untrimmed video. To capture the inherent structures present in both text and video, we introduce a multilevel model that integrates vision and language features earlier and more tightly than prior work. First, we inject text features early on when generating clip proposals, to help eliminate unlikely clips and thus speed up processing and boost performance. Second, to learn a fine-grained similarity metric for retrieval, we use visual features to modulate the processing of query sentences at the word level in a recurrent neural network. A multi-task loss is also employed by adding query re-generation as an auxiliary task. Our approach significantly outperforms prior work on two challenging benchmarks: Charades-STA and ActivityNet Captions.
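The second contribution above, using visual features to modulate query processing at the word level, can be illustrated with a toy sketch. Everything below (the function names, the sigmoid gating form, the simple recurrent update, and the cosine-similarity scoring) is an illustrative assumption for intuition only, not the paper's actual architecture:

```python
import numpy as np

def modulated_query_encoding(word_embs, visual_feat, W_gate, W_rec):
    """Encode a query sentence while modulating each word embedding with a
    clip-level visual feature. This is a simplified stand-in for word-level
    vision-language fusion inside a recurrent encoder (illustrative only)."""
    h = np.zeros(W_rec.shape[0])
    # Sigmoid gate computed from the visual feature (assumed gating form).
    gate = 1.0 / (1.0 + np.exp(-(W_gate @ visual_feat)))
    for w in word_embs:            # process query words in order
        x = gate * w               # visually modulated word feature
        h = np.tanh(W_rec @ x + h) # minimal recurrent state update
    return h

def clip_query_score(clip_feat, query_enc):
    """Cosine similarity between a clip feature and the encoded query,
    used here as a placeholder retrieval score."""
    denom = np.linalg.norm(clip_feat) * np.linalg.norm(query_enc) + 1e-8
    return float(clip_feat @ query_enc / denom)
```

In the actual model, candidate clips would be ranked by such a similarity score, with text features also injected earlier, during proposal generation, to prune unlikely clips before scoring.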

Huijuan Xu, Kun He, Bryan A. Plummer, Leonid Sigal, Stan Sclaroff, Kate Saenko • 2018

Related benchmarks

Task | Dataset | Metric | Result | Rank
Video Grounding | Charades-STA | R@1 (IoU=0.5) | 35.6 | 113
Natural Language Video Localization | Charades-STA (test) | R@1 (IoU=0.5) | 35.6 | 61
Video Grounding | TACoS | Recall@1 (IoU=0.5) | 23.27 | 45
Temporal Grounding | ActivityNet Captions | Recall@1 (IoU=0.5) | 33.26 | 45
Video Grounding | ActivityNet Captions | R@1 (IoU=0.5) | 27.7 | 43
Video Grounding | TACoS | IoU@0.5 | 15.23 | 19
Single-sentence video grounding | ActivityNet Captions | IoU@0.5 | 33.26 | 17
Natural Language Video Localization | ActivityNet Caption (test) | IoU@0.5 | 27.7 | 16
Single-sentence video grounding | TACoS | IoU@0.5 | 15.23 | 16
Video Grounding | ActivityNet Caption | IoU@0.5 | 33.26 | 14
(Table truncated: showing 10 of 12 rows.)
