
MH-DETR: Video Moment and Highlight Detection with Cross-modal Transformer

About

With the increasing demand for video understanding, video moment and highlight detection (MHD) has emerged as a critical research topic. MHD aims to localize all moments and predict clip-wise saliency scores simultaneously. Despite progress made by existing DETR-based methods, we observe that these methods coarsely fuse features from different modalities, which weakens the temporal intra-modal context and results in insufficient cross-modal interaction. To address this issue, we propose MH-DETR (Moment and Highlight Detection Transformer) tailored for MHD. Specifically, we introduce a simple yet efficient pooling operator within the uni-modal encoder to capture global intra-modal context. Moreover, to obtain temporally aligned cross-modal features, we design a plug-and-play cross-modal interaction module between the encoder and decoder, seamlessly integrating visual and textual features. Comprehensive experiments on QVHighlights, Charades-STA, Activity-Net, and TVSum datasets show that MH-DETR outperforms existing state-of-the-art methods, demonstrating its effectiveness and superiority. Our code is available at https://github.com/YoucanBaby/MH-DETR.
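The two architectural ideas in the abstract — a pooling operator in the uni-modal encoder for global intra-modal context, and a cross-modal interaction module that aligns visual and textual features — can be illustrated with a minimal numpy sketch. This is an assumption-laden simplification, not the paper's implementation: the pool here is a plain mean, and the interaction is single-head scaled dot-product attention; the actual MH-DETR modules are more elaborate.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pooling_encoder(feats):
    """Uni-modal encoder sketch: mix each token with a global
    mean-pooled summary to inject intra-modal context.
    (Assumption: the paper's pooling operator is richer than a mean.)"""
    global_ctx = feats.mean(axis=0, keepdims=True)   # (1, d)
    return feats + global_ctx                        # (T, d)

def cross_modal_interaction(vis, txt):
    """Cross-modal module sketch: every video clip attends over all
    text tokens via single-head scaled dot-product attention, yielding
    temporally aligned, text-conditioned clip features."""
    d = vis.shape[-1]
    attn = softmax(vis @ txt.T / np.sqrt(d), axis=-1)  # (T, L)
    return vis + attn @ txt                            # (T, d)

rng = np.random.default_rng(0)
vis = pooling_encoder(rng.standard_normal((8, 16)))   # 8 video clips
txt = pooling_encoder(rng.standard_normal((5, 16)))   # 5 text tokens
fused = cross_modal_interaction(vis, txt)
print(fused.shape)  # (8, 16)
```

The fused clip features would then feed a DETR-style decoder that predicts moment spans, alongside a head that scores clip-wise saliency for highlight detection.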

Yifang Xu, Yunzhuo Sun, Yang Li, Yilei Shi, Xiaoxiang Zhu, Sidan Du • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Moment Retrieval | Charades-STA (test) | R@0.5 | 55.47 | 172 |
| Moment Retrieval | QVHighlights (test) | R@1 (IoU=0.5) | 60.1 | 170 |
| Highlight Detection | QVHighlights (test) | HIT@1 | 60.51 | 151 |
| Video Grounding | QVHighlights (test) | mAP (IoU=0.5) | 60.75 | 64 |
| Video Moment Retrieval | Charades-STA | R1@0.5 | 56.4 | 44 |
| Video Temporal Grounding | QVHighlights (val) | mAP (Avg) | 39.26 | 25 |
| Highlight Detection | TVSum (test) | VT (Top-5 mAP) | 86.1 | 17 |
