
LLaVA-MR: Large Language-and-Vision Assistant for Video Moment Retrieval

About

Multimodal Large Language Models (MLLMs) are widely used for visual perception, understanding, and reasoning. However, long video processing and precise moment retrieval remain challenging due to LLMs' limited context size and coarse frame extraction. We propose the Large Language-and-Vision Assistant for Moment Retrieval (LLaVA-MR), which enables accurate moment retrieval and contextual grounding in videos using MLLMs. LLaVA-MR combines Dense Frame and Time Encoding (DFTE) for spatial-temporal feature extraction, Informative Frame Selection (IFS) for capturing brief visual and motion patterns, and Dynamic Token Compression (DTC) to manage LLM context limitations. Evaluations on benchmarks like Charades-STA and QVHighlights demonstrate that LLaVA-MR outperforms 11 state-of-the-art methods, achieving an improvement of 1.82% in R1@0.5 and 1.29% in mAP@0.5 on the QVHighlights dataset. Our implementation will be open-sourced upon acceptance.
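To make the idea behind Dynamic Token Compression concrete, here is a minimal sketch of one plausible scheme: greedily merging consecutive frame tokens whose cosine similarity exceeds a threshold, so redundant visual content costs fewer LLM context tokens. The function name, merging rule, and threshold are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def compress_tokens(tokens: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Greedily merge runs of consecutive tokens whose cosine similarity
    exceeds `threshold`, averaging each run (hypothetical DTC-style scheme)."""
    if len(tokens) == 0:
        return tokens
    groups = [[tokens[0]]]
    for tok in tokens[1:]:
        prev = groups[-1][-1]
        cos = float(np.dot(prev, tok) /
                    (np.linalg.norm(prev) * np.linalg.norm(tok) + 1e-8))
        if cos > threshold:
            groups[-1].append(tok)   # near-duplicate token: fold into current group
        else:
            groups.append([tok])     # visually distinct token: start a new group
    return np.stack([np.mean(g, axis=0) for g in groups])

# Example: two near-duplicate tokens collapse into one; the distinct token survives.
toks = np.array([[1.0, 0.0], [0.99, 0.01], [0.0, 1.0]])
compressed = compress_tokens(toks, threshold=0.9)
print(compressed.shape)  # (2, 2)
```

A scheme like this keeps the token sequence's temporal order intact while shrinking its length, which is the property that matters when the compressed sequence must still fit inside the LLM's context window.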

Weiheng Lu, Jian Li, An Yu, Ming-Ching Chang, Shengpeng Ji, Min Xia • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Moment Retrieval | QVHighlights (test) | R@1 (IoU=0.5) | 76.59 | 170 |
| Video Moment Retrieval | Charades-STA (test) | Recall@1 (IoU=0.5) | 70.65 | 77 |
| Moment Retrieval | QVHighlights (val) | R@1 (IoU=0.5) | 78.13 | 53 |
| Video Moment Retrieval | Charades-STA | R1@0.5 | 70.65 | 44 |
| Moment Retrieval | QVHighlights v1 (test) | R1@0.5 | 76.59 | 19 |
| Video Moment Retrieval | ActivityNet-Captions (val 2) | R1@0.5 | 55.16 | 7 |
| Video Moment Retrieval | QVHighlights 1.0 (val) | R@1 (IoU=0.5) | 78.13 | 7 |
| Video Moment Retrieval | ActivityNet-Captions (test) | R1@0.5 | 55.16 | 6 |
