Number it: Temporal Grounding Videos like Flipping Manga

About

Video Large Language Models (Vid-LLMs) have made remarkable advancements in comprehending video content for QA dialogue. However, they struggle to extend this visual understanding to tasks requiring precise temporal localization, known as Video Temporal Grounding (VTG). To address this gap, we introduce Number-Prompt (NumPro), a novel method that empowers Vid-LLMs to bridge visual comprehension with temporal grounding by adding unique numerical identifiers to each video frame. Treating a video as a sequence of numbered frame images, NumPro transforms VTG into an intuitive process: flipping through manga panels in sequence. This allows Vid-LLMs to "read" event timelines, accurately linking visual content with the corresponding temporal information. Our experiments demonstrate that NumPro significantly boosts the VTG performance of top-tier Vid-LLMs without additional computational cost. Furthermore, fine-tuning on a NumPro-enhanced dataset sets a new state-of-the-art for VTG, surpassing previous top-performing methods by up to 6.9% in mIoU for moment retrieval and 8.5% in mAP for highlight detection. The code will be available at https://github.com/yongliang-wu/NumPro.
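The core idea is simple to prototype: stamp each frame with its index before passing the frames to a Vid-LLM. Below is a minimal sketch of that preprocessing step using Pillow and NumPy; the function name, overlay position, color, and font are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from PIL import Image, ImageDraw

def add_frame_numbers(frames, color=(255, 0, 0)):
    """Overlay each frame's index onto the frame itself.

    A minimal sketch of the NumPro idea: the model can then "read" the
    stamped numbers to localize events. The exact font, size, and
    placement used in the paper may differ.
    """
    numbered = []
    for idx, frame in enumerate(frames):
        img = Image.fromarray(frame)
        draw = ImageDraw.Draw(img)
        w, h = img.size
        # Stamp the frame index near the bottom-right corner
        # (default bitmap font, red text).
        draw.text((w - 40, h - 20), str(idx), fill=color)
        numbered.append(np.asarray(img))
    return numbered

# Usage: 8 dummy 64x64 RGB frames standing in for a decoded video clip.
frames = [np.zeros((64, 64, 3), dtype=np.uint8) for _ in range(8)]
out = add_frame_numbers(frames)
```

Because the identifiers live in pixel space, no architectural change or extra token stream is needed; the numbered frames are fed to the model exactly like ordinary frames.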

Yongliang Wu, Xinting Hu, Yuyang Sun, Yizhou Zhou, Wenbo Zhu, Fengyun Rao, Bernt Schiele, Xu Yang• 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Highlight Detection | QVHighlights (test) | HIT@1 | 70.7 | 151 |
| Temporal Video Grounding | Charades-STA (test) | Recall@IoU=0.5 | 42 | 117 |
| Temporal Video Grounding | Charades-STA | Rank-1 Recall (IoU=0.5) | 42 | 33 |
| Video Highlight Detection | QVHighlights | mAP | 0.25 | 29 |
| Temporal Video Grounding | ActivityNet (test) | Recall@0.5 | 37.5 | 27 |
| Video Moment Retrieval | ActivityNet-Captions (test) | R1@0.5 | 37.5 | 6 |
| Video Question Answering | MVBench | Scene Transition | 80 | 1 |
| Video Question Answering | Video-MME | Temporal Reasoning Accuracy | 49.7 | 1 |

Other info

Code: https://github.com/yongliang-wu/NumPro