Q-Frame: Query-aware Frame Selection and Multi-Resolution Adaptation for Video-LLMs

About

Multimodal Large Language Models (MLLMs) have demonstrated significant success in visual understanding tasks. However, challenges persist in adapting these models to video comprehension due to the large volume of data and its temporal complexity. Existing Video-LLMs that use uniform frame sampling often fail to capture the crucial query-related spatiotemporal cues in videos. In this paper, we introduce Q-Frame, a novel approach to adaptive frame selection and multi-resolution scaling tailored to the video's content and the specific query. Q-Frame employs a training-free, plug-and-play strategy driven by a text-image matching network such as CLIP, using the Gumbel-Max trick for efficient frame selection. This allows Video-LLMs to process more frames without exceeding computational limits, thereby preserving critical temporal and spatial information. We demonstrate Q-Frame's effectiveness through extensive experiments on benchmark datasets, including MLVU, LongVideoBench, and Video-MME, illustrating its superiority over existing methods and its applicability across various video understanding tasks.
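The core selection step described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes per-frame query-similarity scores (e.g. from a CLIP-style text-image matcher) are already computed, and applies the Gumbel-Top-k variant of the Gumbel-Max trick to sample k frames without replacement, biased toward query-relevant frames; the function name and temperature parameter are hypothetical.

```python
import numpy as np

def gumbel_topk_select(scores, k, temperature=1.0, seed=None):
    """Sample k frame indices without replacement, biased toward high scores.

    Adds i.i.d. Gumbel(0, 1) noise to the (temperature-scaled) scores and
    takes the top-k of the perturbed values -- the Gumbel-Top-k trick.
    Returns indices sorted in temporal order.
    """
    rng = np.random.default_rng(seed)
    logits = np.asarray(scores, dtype=np.float64) / temperature
    # Gumbel(0, 1) noise: -log(-log(U)), U ~ Uniform(0, 1)
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    perturbed = logits + gumbel
    top_k = np.argpartition(perturbed, -k)[-k:]
    return np.sort(top_k)

# Toy example: 8 frames with (hypothetical) CLIP query-frame similarities.
sims = [0.11, 0.85, 0.20, 0.90, 0.15, 0.70, 0.10, 0.05]
selected = gumbel_topk_select(sims, k=3, seed=0)
print(selected)  # 3 frame indices in temporal order
```

Lowering `temperature` concentrates selection on the highest-scoring frames; raising it makes the sampling more exploratory, which is why a stochastic relaxation like this is preferred over a hard top-k when diversity across the video matters.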

Shaojie Zhang, Jiahui Yang, Jianqin Yin, Zhenbo Luo, Jian Luan • 2025

Related benchmarks

Task | Dataset | Result | Rank
Video Question Answering | VideoMME (test) | Medium Length Score: 68.21 | 45
Video Question Answering | MLVU (test) | Accuracy: 70.1 | 45
Video Question Answering | LongVideoBench (LVB) (test) | Accuracy: 60.06 | 45
Video Question Answering | VideoMME w/o sub. 1.0 (test) | Overall Acc: 58.3 | 8
Video Question Answering | MLVU 1.0 (test) | Accuracy: 65.4 | 6
Video Question Answering | LongVideoBench 1.0 (test) | Accuracy: 58.4 | 6
