
VideoLights: Feature Refinement and Cross-Task Alignment Transformer for Joint Video Highlight Detection and Moment Retrieval

About

Prevailing joint prediction transformers for Video Highlight Detection and Moment Retrieval (HD/MR) exhibit deficiencies in handling cross-task dynamics, achieving robust video-text alignment, and utilizing effective attention mechanisms, with the potential of Large Language/Vision-Language Models (LLMs/LVLMs) remaining largely untapped. This paper introduces VideoLights, a novel HD/MR framework addressing these limitations by incorporating: (i) Convolutional Projection and Feature Refinement modules with an alignment loss for enhanced video-text feature congruity; (ii) a Bi-Directional Cross-Modal Fusion network for strongly coupled query-aware representations; (iii) a uni-directional joint-task feedback mechanism for synergistic task improvement; (iv) hard positive/negative losses for adaptive learning; and (v) the leveraging of LVLMs (e.g., BLIP-2) for superior multimodal feature integration and intelligent pre-training with synthetic data. Comprehensive evaluations on the QVHighlights, TVSum, and Charades-STA benchmarks demonstrate that VideoLights significantly surpasses existing baselines, establishing new state-of-the-art performance. Code and model checkpoints are available at https://github.com/dpaul06/VideoLights .
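The Bi-Directional Cross-Modal Fusion idea in (ii) can be illustrated as two cross-attention passes: video features attend over the text query, and text features attend over the video. The sketch below is a minimal, illustrative implementation in numpy; all function names, the residual connections, and the feature dimensions are assumptions for exposition, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    # Scaled dot-product attention: each query row attends over keys_values.
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)
    return softmax(scores) @ keys_values

def bidirectional_fusion(video_feats, text_feats):
    """Fuse video clip features (num_clips, d) with text token
    features (num_tokens, d) via two cross-attention passes,
    returning query-aware video and video-aware text features."""
    video_aware = cross_attention(video_feats, text_feats)  # video -> text
    text_aware = cross_attention(text_feats, video_feats)   # text -> video
    # Residual connections preserve the original modality information.
    return video_feats + video_aware, text_feats + text_aware

# Toy example: 5 video clips and 3 query tokens, 8-dim features.
rng = np.random.default_rng(0)
video = rng.normal(size=(5, 8))
text = rng.normal(size=(3, 8))
fused_video, fused_text = bidirectional_fusion(video, text)
```

Each output keeps its own modality's sequence length and dimensionality, so the fused features can feed directly into downstream HD and MR prediction heads.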

Dhiman Paul, Md Rizwan Parvez, Nabeel Mohammed, Shafin Rahman • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Moment Retrieval | Charades-STA (test) | R@0.5 | 61.96 | 172 |
| Moment Retrieval | QVHighlights (test) | R@1 (IoU=0.5) | 70.36 | 170 |
| Highlight Detection | QVHighlights (test) | HIT@1 | 70.56 | 151 |
| Video Moment Retrieval | TACoS (test) | Recall@1 (0.5 threshold) | 40.61 | 70 |
| Highlight Detection | TVSum (test) | VT (Top-5 mAP) | 91.8 | 17 |
| Moment Retrieval | Ego4D NLQ (test) | R@0.3 | 7.56 | 5 |
