Span-based Localizing Network for Natural Language Video Localization
About
Given an untrimmed video and a text query, natural language video localization (NLVL) aims to locate a span in the video that semantically corresponds to the query. Existing solutions formulate NLVL either as a ranking task, applying multimodal matching architectures, or as a regression task that directly regresses the target video span. In this work, we address the NLVL task with a span-based QA approach by treating the input video as a text passage. We propose a video span localizing network (VSLNet), built on top of the standard span-based QA framework, to address NLVL. VSLNet tackles the differences between NLVL and span-based QA through a simple yet effective query-guided highlighting (QGH) strategy: QGH guides VSLNet to search for the matching video span within a highlighted region. Through extensive experiments on three benchmark datasets, we show that VSLNet outperforms state-of-the-art methods, and that adopting a span-based QA framework is a promising direction for solving NLVL.
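The two ideas above can be illustrated with a minimal sketch. This is not the paper's implementation (which uses learned layers): the highlighting score is approximated here with a hypothetical dot-product scorer, and the span selection is an exhaustive search over start/end logits with start ≤ end, as in standard span-based QA.

```python
import numpy as np

def query_guided_highlight(video_feats, query_feat):
    """Weight each video feature by a query-conditioned highlighting score.

    video_feats: (T, d) array of video clip features.
    query_feat:  (d,) sentence-level query feature.
    Hypothetical scorer: sigmoid of a dot product (the paper learns this).
    """
    scores = 1.0 / (1.0 + np.exp(-(video_feats @ query_feat)))  # shape (T,)
    return video_feats * scores[:, None], scores

def best_span(start_logits, end_logits):
    """Pick (s, e) with s <= e maximizing start_logits[s] + end_logits[e]."""
    T = len(start_logits)
    best, best_score = (0, 0), -np.inf
    for s in range(T):
        for e in range(s, T):
            score = start_logits[s] + end_logits[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best
```

In VSLNet, the highlighted features (not the raw ones) feed the start/end predictors, so the span search is effectively restricted to the highlighted region.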
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Moment Retrieval | Charades-STA (test) | R@0.5 | 48.67 | 172 |
| Temporal Video Grounding | Charades-STA (test) | Recall@IoU=0.5 | 42.69 | 117 |
| Video Moment Retrieval | Charades-STA (test) | Recall@1 (IoU=0.5) | 54.19 | 77 |
| Video Moment Retrieval | TACOS (test) | Recall@1 (0.5 Threshold) | 24.27 | 70 |
| Temporal Grounding | Charades-STA (test) | Recall@1 (IoU=0.5) | 47.3 | 68 |
| Natural Language Video Localization | Charades-STA (test) | R@1 (IoU=0.5) | 54.19 | 61 |
| Moment Retrieval | TACOS (test) | Recall@1 (IoU=0.5) | 24.27 | 23 |
| Natural Language Queries | Ego4D NLQ (val) | Recall@1 (IoU=0.3) | 0.0545 | 23 |
| Video Grounding | Ego4D-NLQ v1 (test) | Recall@1 (IoU=0.3) | 10.84 | 21 |
| Natural Language Queries | Ego4D NLQ (test) | R@1 (IoU=0.3) | 5.42 | 21 |
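The Recall@1 (IoU=μ) metrics in the table count a prediction as correct when the temporal IoU between the top-ranked predicted span and the ground-truth span is at least μ. A minimal sketch of this computation (function names here are illustrative, not from the benchmark toolkits):

```python
def temporal_iou(pred, gt):
    """IoU between two 1-D intervals (start, end), in seconds or frames."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def recall_at_1(preds, gts, threshold=0.5):
    """Fraction of queries whose top-1 predicted span reaches the IoU threshold."""
    hits = sum(temporal_iou(p, g) >= threshold for p, g in zip(preds, gts))
    return hits / len(gts)
```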