Divide, then Ground: Adapting Frame Selection to Query Types for Long-Form Video Understanding
About
The application of Large Multimodal Models (LMMs) to long-form video understanding is constrained by limited context lengths and the computationally prohibitive cost of processing dense video tokens. Consequently, recent research has focused on query-aware frame selection, but these methods often incur significant computational overhead. This paper challenges the assumption that such complex search mechanisms are universally necessary. We first identify and validate a query typology that distinguishes global queries from localized queries. We demonstrate that while uniform sampling is both effective and efficient for global queries, localized queries do require query-aware selection for optimal performance. Building on this insight, we propose DIG, a training-free frame selection framework that adapts its strategy to the query type. Specifically, DIG employs efficient uniform sampling for global queries while activating a specialized pipeline to extract query-relevant frames for localized queries. Experiments on three long-form video understanding benchmarks demonstrate that DIG consistently outperforms existing baselines and robustly improves LMM performance, even when scaling the input frame count to 256.
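The routing idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the keyword-based `classify_query` heuristic and the precomputed per-frame relevance `scores` are placeholder assumptions standing in for DIG's actual query classifier and its query-relevant frame extraction pipeline.

```python
from typing import List, Optional

# Hypothetical keyword heuristic; DIG's real query typology is not this simple.
LOCALIZED_CUES = ("when", "moment", "scene", "before", "after", "first", "last")

def classify_query(query: str) -> str:
    """Label a query as 'global' (about the whole video) or 'localized'."""
    q = query.lower()
    return "localized" if any(cue in q for cue in LOCALIZED_CUES) else "global"

def uniform_sample(num_frames: int, budget: int) -> List[int]:
    """Evenly spaced frame indices across the full video."""
    if budget >= num_frames:
        return list(range(num_frames))
    step = num_frames / budget
    return [int(i * step) for i in range(budget)]

def select_relevant(scores: List[float], budget: int) -> List[int]:
    """Top-k frames by query-frame relevance, returned in temporal order."""
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:budget]
    return sorted(top)

def dig_select(query: str, num_frames: int, budget: int,
               scores: Optional[List[float]] = None) -> List[int]:
    """Adaptive routing: cheap uniform sampling for global queries,
    query-aware selection for localized ones."""
    if classify_query(query) == "global" or scores is None:
        return uniform_sample(num_frames, budget)
    return select_relevant(scores, budget)
```

For example, a global query such as "Summarize the plot" falls back to uniform sampling, while "What happens after the crash scene?" triggers the relevance-based path, concentrating the frame budget around high-scoring frames.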
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Video Understanding | VideoMME | -- | 127 |
| Long Video Understanding | LongVideoBench | Score: 64.6 | 110 |
| Long Video Understanding | MLVU | -- | 72 |
| Video Question Answering | MLVU (test) | Accuracy: 76.66 | 45 |
| Video Question Answering | LongVideoBench (LVB) (test) | Accuracy: 66.42 | 45 |
| Video Question Answering | VideoMME (test) | Medium Length Score: 70.11 | 45 |
| Video Understanding | VideoMME Long | Score: 59 | 25 |