# VideoLights: Feature Refinement and Cross-Task Alignment Transformer for Joint Video Highlight Detection and Moment Retrieval
## About
Prevailing joint prediction transformers for Video Highlight Detection and Moment Retrieval (HD/MR) struggle with cross-task dynamics, robust video-text alignment, and effective attention mechanisms, and they leave the potential of Large Language/Vision-Language Models (LLMs/LVLMs) largely untapped. This paper introduces VideoLights, a novel HD/MR framework that addresses these limitations with: (i) Convolutional Projection and Feature Refinement modules with an alignment loss for improved video-text feature congruity; (ii) a Bi-Directional Cross-Modal Fusion network for strongly coupled query-aware representations; (iii) a uni-directional joint-task feedback mechanism so each task reinforces the other; (iv) hard positive/negative losses for adaptive learning; and (v) LVLMs (e.g., BLIP-2) for stronger multimodal feature integration and intelligent pre-training on synthetic data. Comprehensive evaluations on the QVHighlights, TVSum, and Charades-STA benchmarks show that VideoLights significantly surpasses existing baselines, establishing new state-of-the-art results. Code and model checkpoints are available at https://github.com/dpaul06/VideoLights .
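
To make component (ii) concrete, below is a minimal PyTorch sketch of a bi-directional cross-modal fusion block in the spirit described above: video clips attend to query tokens and query tokens attend to video clips, yielding query-aware representations on both sides. The module name, dimensions, single-layer structure, and residual/LayerNorm placement are illustrative assumptions, not the paper's exact architecture; see the linked repository for the actual implementation.

```python
import torch
import torch.nn as nn

class BiDirectionalCrossModalFusion(nn.Module):
    """Sketch: text attends to video and video attends to text,
    producing query-aware clip and token representations."""

    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        # Video-to-text attention: video clips query the text tokens.
        self.v2t_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Text-to-video attention: text tokens query the video clips.
        self.t2v_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_v = nn.LayerNorm(dim)
        self.norm_t = nn.LayerNorm(dim)

    def forward(self, video_feats, text_feats):
        # video_feats: (B, Nv, dim) clip features; text_feats: (B, Nt, dim) token features.
        attended_v, _ = self.v2t_attn(video_feats, text_feats, text_feats)
        attended_t, _ = self.t2v_attn(text_feats, video_feats, video_feats)
        # Residuals keep the unimodal signal alongside the fused one.
        return self.norm_v(video_feats + attended_v), self.norm_t(text_feats + attended_t)

# Usage with random features standing in for CLIP/BLIP-2 embeddings.
fusion = BiDirectionalCrossModalFusion(dim=256)
video = torch.randn(2, 75, 256)   # 2 videos, 75 clips each
text = torch.randn(2, 20, 256)    # 2 queries, 20 tokens each
v_out, t_out = fusion(video, text)
print(v_out.shape, t_out.shape)   # (2, 75, 256) (2, 20, 256)
```

Running the two attention directions in parallel, rather than a single video-to-text pass, is what couples the modalities in both directions; the fused outputs can then feed the joint HD/MR heads.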
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Moment Retrieval | Charades-STA (test) | R@0.5 | 61.96 | 172 |
| Moment Retrieval | QVHighlights (test) | R@1 (IoU=0.5) | 70.36 | 170 |
| Highlight Detection | QVHighlights (test) | HIT@1 | 70.56 | 151 |
| Video Moment Retrieval | TACoS (test) | Recall@1 (0.5 Threshold) | 40.61 | 70 |
| Highlight Detection | TVSum (test) | VT (Top-5 mAP) | 91.8 | 17 |
| Moment Retrieval | Ego4D NLQ (test) | R@0.3 | 7.56 | 5 |