Learning Grounded Vision-Language Representation for Versatile Understanding in Untrimmed Videos

About

Joint video-language learning has received increasing attention in recent years. However, existing works mainly focus on single or multiple trimmed video clips (events), which makes human-annotated event boundaries necessary during inference. To break free of this constraint, we propose a grounded vision-language learning framework for untrimmed videos, which automatically detects informative events and effectively exploits the alignments between multi-sentence descriptions and the corresponding event segments. Instead of coarse-level video-language alignments, we present two dual pretext tasks to encourage fine-grained segment-level alignments, i.e., text-to-event grounding (TEG) and event-to-text generation (ETG). TEG learns to adaptively ground possible event proposals given a set of sentences by estimating the cross-modal distance in a joint semantic space. Meanwhile, ETG aims to reconstruct (generate) the matched texts given event proposals, encouraging the event representation to retain meaningful semantic information. To encourage accurate label assignment between the event set and the text set, we propose a novel semantic-aware cost to mitigate the sub-optimal matching results caused by ambiguous boundary annotations. Our framework is easily extensible to tasks covering visually-grounded language understanding and generation. We achieve state-of-the-art dense video captioning performance on ActivityNet Captions, YouCook2 and YouMakeup, and competitive performance on several other language generation and understanding tasks. Our method also achieved 1st place in both the MTVG and MDVC tasks of the PIC 4th Challenge. Our code is publicly available at https://github.com/zjr2000/GVL.
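The semantic-aware assignment between the event set and the text set can be pictured as a bipartite matching whose cost mixes temporal overlap with embedding similarity. The sketch below is illustrative only and assumes this general DETR-style formulation; the function names (`temporal_iou`, `semantic_aware_assignment`), the weight `lam`, and the specific cost terms are my assumptions, not the paper's exact definitions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def temporal_iou(proposals, targets):
    """Pairwise temporal IoU between (N, 2) event proposals and
    (M, 2) annotated segments, each row being [start, end]."""
    s = np.maximum(proposals[:, None, 0], targets[None, :, 0])
    e = np.minimum(proposals[:, None, 1], targets[None, :, 1])
    inter = np.clip(e - s, 0.0, None)
    union = (proposals[:, 1] - proposals[:, 0])[:, None] \
          + (targets[:, 1] - targets[:, 0])[None, :] - inter
    return inter / np.clip(union, 1e-6, None)

def semantic_aware_assignment(proposals, targets, event_emb, text_emb, lam=1.0):
    """One-to-one matching of event proposals to sentences.

    The cost combines a localization term (1 - IoU) with a semantic
    term (1 - cosine similarity of L2-normalized embeddings), so a
    proposal near an ambiguous boundary is still matched to the
    sentence it is semantically closest to."""
    loc_cost = 1.0 - temporal_iou(proposals, targets)
    e = event_emb / np.linalg.norm(event_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    sem_cost = 1.0 - e @ t.T
    rows, cols = linear_sum_assignment(loc_cost + lam * sem_cost)
    return list(zip(rows.tolist(), cols.tolist()))
```

With `lam=0` this reduces to a purely temporal Hungarian matching; increasing `lam` lets the semantic term override an overlap-based match when boundary annotations are ambiguous.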

Teng Wang, Jinrui Zhang, Feng Zheng, Wenhao Jiang, Ran Cheng, Ping Luo • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Video Moment Retrieval | TACoS (test) | Recall@1 (IoU@0.5) | 34.57 | 70 |
| Dense Video Captioning | ActivityNet Captions | METEOR | 10.03 | 43 |
| Video Captioning | ActivityNet Captions (val) | METEOR | 10.03 | 22 |
| Dense Video Captioning | YouCook2 (val) | METEOR | 5.01 | 19 |
| Single-sentence video grounding | ActivityNet Captions | IoU@0.5 | 49.18 | 17 |
| Single-sentence video grounding | TACoS | IoU@0.5 | 34.57 | 16 |
| Video Paragraph Captioning | ActivityNet Captions | BLEU@4 | 11.7 | 9 |
| Video Moment Retrieval | ActivityNet Captions (val 2) | R1@0.5 | 49.18 | 7 |
| Multi-sentence video grounding | ActivityNet Captions (test) | IoU@0.5 | 60.67 | 6 |
| Multi-sentence video grounding | TACoS (test) | IoU@0.3 | 48.29 | 5 |

Showing 10 of 14 rows.

Other info

Code: https://github.com/zjr2000/GVL