
See More, Store Less: Memory-Efficient Resolution for Video Moment Retrieval

About

Recent advances in Multimodal Large Language Models (MLLMs) have improved image recognition and reasoning, but video-related tasks remain challenging due to memory constraints from dense frame processing. Existing Video Moment Retrieval (VMR) methodologies rely on sparse frame sampling, risking potential information loss, especially in lengthy videos. We propose SMORE (See MORE, store less), a framework that enhances memory efficiency while maintaining high information resolution. SMORE (1) uses query-guided captions to encode semantics aligned with user intent, (2) applies query-aware importance modulation to highlight relevant segments, and (3) adaptively compresses frames to preserve key content while reducing redundancy. This enables efficient video understanding without exceeding memory budgets. Experimental validation reveals that SMORE achieves state-of-the-art performance on QVHighlights, Charades-STA, and ActivityNet-Captions benchmarks.
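The query-aware importance modulation and adaptive compression steps described above can be sketched roughly as follows. This is a minimal illustration under assumed shapes and a simple cosine-similarity relevance measure, not the authors' implementation; the function names (`modulate_importance`, `compress_frames`) and the keep-plus-summary compression scheme are illustrative assumptions.

```python
# Hedged sketch: score frames by query relevance, then compress the
# least relevant ones to stay within a fixed memory budget.
# Shapes, scoring, and compression strategy are assumptions, not SMORE's code.
import numpy as np

def modulate_importance(frame_feats, query_feat):
    """Score each frame by cosine similarity to the query embedding,
    normalized into a relevance distribution via softmax."""
    f = frame_feats / np.linalg.norm(frame_feats, axis=1, keepdims=True)
    q = query_feat / np.linalg.norm(query_feat)
    scores = f @ q                         # (num_frames,)
    e = np.exp(scores - scores.max())
    return e / e.sum()

def compress_frames(frame_feats, importance, budget):
    """Keep the `budget` most query-relevant frames in temporal order;
    average the remainder into one summary vector so that less relevant
    segments are compressed rather than discarded outright."""
    order = np.argsort(importance)[::-1]
    keep, rest = order[:budget], order[budget:]
    kept = frame_feats[np.sort(keep)]      # preserve temporal order
    summary = frame_feats[rest].mean(axis=0, keepdims=True)
    return np.concatenate([kept, summary], axis=0)

rng = np.random.default_rng(0)
frames = rng.normal(size=(100, 64))        # 100 frames, 64-dim features
query = rng.normal(size=64)                # query embedding
imp = modulate_importance(frames, query)
compact = compress_frames(frames, imp, budget=16)
print(compact.shape)                       # (17, 64): 16 kept + 1 summary
```

In this sketch the memory budget is a hard cap on retained frame tokens; the single mean-pooled summary vector stands in for whatever redundancy-reducing compression the paper actually applies.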

Mingyu Jeon, Sungjin Han, Jinkwon Hwang, Minchol Kwon, Jonghee Kim, Junyeong Kim • 2026

Related benchmarks

Task                     Dataset                      Metric         Result   Rank
Video Moment Retrieval   Charades-STA                 R1@0.5         71.26    44
Moment Retrieval         QVHighlights v1 (test)       R1@0.5         76.39    19
Video Moment Retrieval   QVHighlights 1.0 (val)       R@1 (IoU=0.5)  78.84    7
Video Moment Retrieval   ActivityNet-Captions (test)  R1@0.5         56.31    6
