
Divide, then Ground: Adapting Frame Selection to Query Types for Long-Form Video Understanding

About

The application of Large Multimodal Models (LMMs) to long-form video understanding is constrained by limited context lengths and the computationally prohibitive cost of processing dense video tokens. Consequently, recent research has focused on query-aware frame selection, but such methods often incur significant computational overhead. This paper challenges the assumption that complex search mechanisms are universally necessary. We first identify and validate a query typology distinguishing between global queries and localized queries. We demonstrate that while uniform sampling is both effective and efficient for global queries, localized queries do necessitate query-aware selection for optimal performance. Building on this insight, we propose DIG, a training-free frame selection framework that adapts its strategy to the query type. Specifically, DIG employs efficient uniform sampling for global queries while activating a specialized pipeline to extract query-relevant frames for localized queries. Experiments on three long-form video understanding benchmarks demonstrate that DIG consistently outperforms existing baselines and robustly improves LMM performance, even when scaling the input frame count to 256.

Jialuo Li, Bin Li, Jiahao Li, Yan Lu • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Video Understanding | VideoMME | -- | 127 |
| Long Video Understanding | LongVideoBench | Score: 64.6 | 110 |
| Long Video Understanding | MLVU | -- | 72 |
| Video Question Answering | MLVU 78 (test) | Accuracy: 76.66 | 45 |
| Video Question Answering | LongVideoBench (LVB) 58 (test) | Accuracy: 66.42 | 45 |
| Video Question Answering | VideoMME 16 (test) | Medium Length Score: 70.11 | 45 |
| Video Understanding | VideoMME Long | Score: 59 | 25 |

Other info

GitHub
