
Dense Regression Network for Video Grounding

About

We address the problem of video grounding from natural language queries. The key challenge in this task is that one training video might contain only a few annotated starting/ending frames that can be used as positive examples for model training. Most conventional approaches directly train a binary classifier on such imbalanced data, and thus achieve inferior results. The key idea of this paper is to use the distances between each frame within the ground truth and the starting (ending) frame as dense supervision to improve video grounding accuracy. Specifically, we design a novel dense regression network (DRN) to regress the distances from each frame to the starting (ending) frame of the video segment described by the query. We also propose a simple but effective IoU regression head module to explicitly consider the localization quality of the grounding results (i.e., the IoU between the predicted location and the ground truth). Experimental results show that our approach significantly outperforms state-of-the-art methods on three datasets (i.e., Charades-STA, ActivityNet-Captions, and TACoS).
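The dense supervision idea above can be sketched in a few lines: every frame inside the ground-truth segment becomes a positive example whose regression targets are its distances to the segment boundaries, and the IoU head's target is the temporal IoU between a predicted segment and the ground truth. The function names and the `-1` mask value below are illustrative assumptions, not the paper's actual implementation.

```python
def dense_regression_targets(num_frames, gt_start, gt_end):
    """For each frame inside the ground-truth segment [gt_start, gt_end],
    compute its distance to the starting and ending frames.
    Frames outside the segment are masked with -1 (an assumed convention)."""
    d_start, d_end, inside = [], [], []
    for t in range(num_frames):
        in_seg = gt_start <= t <= gt_end
        inside.append(in_seg)
        d_start.append(t - gt_start if in_seg else -1)  # distance to start
        d_end.append(gt_end - t if in_seg else -1)      # distance to end
    return d_start, d_end, inside

def temporal_iou(pred, gt):
    """Temporal IoU between two 1-D segments (start, end) -- the
    training target of the IoU regression head."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0
```

Because every in-segment frame contributes a regression target, a segment of K frames yields K positives instead of the two boundary annotations, which is what mitigates the label imbalance.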

Runhao Zeng, Haoming Xu, Wenbing Huang, Peihao Chen, Mingkui Tan, Chuang Gan • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Video Grounding | Charades-STA | R@1 IoU=0.5 | 53.09 | 113 |
| Video Moment Retrieval | Charades-STA (test) | Recall@1 (IoU=0.5) | 45.4 | 77 |
| Video Moment Retrieval | TACoS (test) | Recall@1 (0.5 Threshold) | 23.17 | 70 |
| Temporal Grounding | Charades-STA (test) | Recall@1 (IoU=0.5) | 42.9 | 68 |
| Natural Language Video Localization | Charades-STA (test) | R@1 (IoU=0.5) | 53.09 | 61 |
| Temporal Grounding | ActivityNet Captions | Recall@1 (IoU=0.5) | 45.45 | 45 |
| Video Grounding | TACoS | Recall@1 (IoU=0.5) | 23.17 | 45 |
| Video Grounding | ActivityNet Captions | R@1 (IoU=0.5) | 45.45 | 43 |
| Video Grounding | TACoS | IoU@0.5 | 23.17 | 19 |
| Single-sentence video grounding | ActivityNet Captions | IoU@0.5 | 45.45 | 17 |

Showing 10 of 23 rows.
