
Span-based Localizing Network for Natural Language Video Localization

About

Given an untrimmed video and a text query, natural language video localization (NLVL) aims to locate a span of the video that semantically corresponds to the query. Existing solutions formulate NLVL either as a ranking task, applying a multimodal matching architecture, or as a regression task that directly regresses the target video span. In this work, we address the NLVL task with a span-based QA approach by treating the input video as a text passage. We propose a video span localizing network (VSLNet), built on top of the standard span-based QA framework, to address NLVL. VSLNet tackles the differences between NLVL and span-based QA through a simple yet effective query-guided highlighting (QGH) strategy: QGH guides VSLNet to search for the matching video span within a highlighted region. Through extensive experiments on three benchmark datasets, we show that VSLNet outperforms state-of-the-art methods, and that adopting a span-based QA framework is a promising direction for solving NLVL.
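The core idea above can be sketched in a few lines: predict per-frame start and end scores as in span-based QA, then restrict the span search to frames the highlighting module marks as foreground. This is a minimal illustrative sketch, not the paper's exact formulation; the function names, the score-combination rule, and the 0.5 threshold are assumptions.

```python
import numpy as np

def query_guided_span(start_logits, end_logits, highlight, threshold=0.5):
    """Pick the best (start, end) frame pair, restricted by highlighting.

    start_logits, end_logits: per-frame boundary scores (1-D arrays).
    highlight: per-frame foreground probability from a QGH-like module.
    Frames with highlight < threshold are suppressed before the search,
    so the selected span lies within the highlighted region.
    """
    mask = highlight >= threshold
    neg = -1e9  # effectively removes masked frames from consideration
    s = np.where(mask, start_logits, neg)
    e = np.where(mask, end_logits, neg)
    best, span = -np.inf, (0, 0)
    for i in range(len(s)):           # candidate start frame
        for j in range(i, len(e)):    # candidate end frame (>= start)
            if s[i] + e[j] > best:
                best, span = s[i] + e[j], (i, j)
    return span

# Frame 0 has the highest start score, but it is outside the
# highlighted region, so the span (1, 2) is chosen instead.
start = np.array([5.0, 2.0, 1.0, 4.0])
end = np.array([1.0, 1.0, 3.0, 5.0])
hl = np.array([0.1, 0.9, 0.9, 0.2])
print(query_guided_span(start, end, hl))  # → (1, 2)
```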

Hao Zhang, Aixin Sun, Wei Jing, Joey Tianyi Zhou • 2020

Related benchmarks

Task | Dataset | Result | Rank
---- | ------- | ------ | ----
Moment Retrieval | Charades-STA (test) | R@0.5: 48.67 | 186
Temporal Video Grounding | Charades-STA (test) | Recall@IoU=0.5: 42.69 | 124
Video Moment Retrieval | TACOS (test) | Recall@1 (0.5 Threshold): 24.27 | 79
Video Moment Retrieval | Charades-STA (test) | Recall@1 (IoU=0.5): 54.19 | 77
Temporal Grounding | ActivityNet Captions | Recall@1 (IoU=0.5): 43.22 | 75
Temporal Grounding | Charades-STA (test) | Recall@1 (IoU=0.5): 47.3 | 68
Natural Language Video Localization | Charades-STA (test) | R@1 (IoU=0.5): 54.19 | 61
Video Grounding | Ego4D-NLQ v1 (test) | Recall@1 (Avg): 11.93 | 27
Temporal Grounding | Ego4D-NLQ | R@1 (IoU=0.3): 10.84 | 25
Moment Retrieval | TACOS (test) | Recall@1 (IoU=0.5): 24.27 | 23

Showing 10 of 45 rows.
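The "Recall@1 (IoU=0.5)" metrics in the table count a query as correctly localized when the model's top-ranked span overlaps the ground-truth span with a temporal IoU of at least 0.5. A minimal sketch of that metric (function names are my own, not from any benchmark toolkit):

```python
def temporal_iou(pred, gt):
    """Temporal IoU between two (start, end) intervals in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def recall_at_1(preds, gts, iou_threshold=0.5):
    """Percentage of queries whose top-1 prediction reaches the IoU threshold."""
    hits = sum(temporal_iou(p, g) >= iou_threshold for p, g in zip(preds, gts))
    return 100.0 * hits / len(gts)

# Two queries: the first prediction overlaps too little (IoU = 1/3),
# the second overlaps enough (IoU = 0.8), so Recall@1 (IoU=0.5) = 50.0.
preds = [(2.0, 6.0), (3.0, 8.0)]
gts = [(4.0, 8.0), (4.0, 8.0)]
print(recall_at_1(preds, gts))  # → 50.0
```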

Other info

Code
