Chain-of-Frames: Advancing Video Understanding in Multimodal LLMs via Frame-Aware Reasoning

About

Recent work has shown that prompting Large Language Models (LLMs) to generate reasoning traces in natural language before answering the user's request can significantly improve their performance across tasks. This approach has been extended to multimodal LLMs, where the models can produce chains of thought (CoT) about the content of input images and videos. For video inputs, prior works either use complex multi-step pipelines that extract relevant frames and include them in the CoT, or produce simpler single-stage reasoning traces at the cost of poor temporal grounding. Here, we propose the first video LLMs with single-stage reasoning that includes explicit references to relevant frames, thereby reducing temporal inconsistencies in the reasoning process. Our approach is simple, unified, and self-contained, employing single-stage inference to handle complex video understanding tasks without relying on auxiliary modules for frame selection or caption generation. For this, we first create COF-DATA, a large dataset of diverse questions, answers, and corresponding frame-grounded reasoning traces from both natural and synthetic videos, spanning various topics and tasks. Our models, obtained by fine-tuning video LLMs on this chain-of-frames (CoF) data, generate reasoning traces that accurately identify key frames to answer given questions. In turn, this consistently improves performance across multiple video understanding benchmarks. Surprisingly, we find that synthetic data alone, despite being out-of-distribution with respect to these real-world benchmarks, provides a significant boost in model accuracy. Code is available at https://github.com/SaraGhazanfari/CoF.
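
To make the frame-grounded reasoning format concrete, below is a minimal Python sketch of what a CoF-style data record and a frame-reference check might look like. The `<frame k>` tag syntax, the field names, and the example trace are illustrative assumptions, not the exact schema of COF-DATA or the released code.

```python
import re

# Hypothetical COF-DATA-style record: a question, a single-stage reasoning
# trace that cites frames inline, and the final answer. The <frame k> tag
# format is an assumption for illustration, not the paper's exact schema.
record = {
    "question": "What does the person do with the cup?",
    "cof_reasoning": (
        "The person picks up the cup in <frame 12> and raises it in "
        "<frame 18>; by <frame 34> the cup is empty, so they drank from it."
    ),
    "answer": "They drink from the cup.",
}

def cited_frames(trace: str) -> list[int]:
    """Return the frame indices that the reasoning trace explicitly cites."""
    return [int(k) for k in re.findall(r"<frame\s*(\d+)>", trace)]

frames = cited_frames(record["cof_reasoning"])
print(frames)  # [12, 18, 34]

# A simple illustrative consistency check: frame references in the trace
# should appear in temporal (video) order.
assert frames == sorted(frames), "frame references should follow video order"
```

Checks like the one above only illustrate the idea that explicit frame references make the reasoning's temporal grounding inspectable; the paper's models learn this behavior through fine-tuning rather than through post-hoc validation.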

Sara Ghazanfari, Francesco Croce, Nicolas Flammarion, Prashanth Krishnamurthy, Farshad Khorrami, Siddharth Garg • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-modal Video Understanding | MVBench | -- | -- | 70 |
| Multi-modal Video Understanding | VideoMME | Accuracy | 73.7 | 50 |
| Video spatial reasoning | VSI-Bench | Average Score | 72.1 | 32 |
| Video Hallucination Evaluation | EventHallusion | -- | -- | 29 |
| Video Hallucination Evaluation | VidHal | Accuracy | 79.5 | 11 |
| Multimodal Summarization | VIEWS | BLEU-4 | 4.09 | 5 |
| Multimodal Summarization | MM-AVS | BLEU-4 | 4.98 | 5 |
| Multimodal Summarization | XMSMO | BLEU-4 | 0.51 | 5 |
| Multimodal Summarization | TIB | BLEU-4 | 1.66 | 5 |
| Multimodal Summarization | Vista | BLEU-4 | 3.69 | 5 |
(Showing 10 of 12 rows.)
