
Rethinking Chain-of-Thought Reasoning for Videos

About

Chain-of-thought (CoT) reasoning has been highly successful in solving complex tasks in natural language processing, and recent multimodal large language models (MLLMs) have extended this paradigm to video reasoning. However, these models typically build on lengthy reasoning chains and large numbers of input visual tokens. Motivated by empirical observations from our benchmark study, we hypothesize that concise reasoning combined with a reduced set of visual tokens can be sufficient for effective video reasoning. To evaluate this hypothesis, we design and validate an efficient post-training and inference framework that enhances a video MLLM's reasoning capability. Our framework enables models to operate on compressed visual tokens and generate brief reasoning traces prior to answering. The resulting models achieve substantially improved inference efficiency, deliver competitive performance across diverse benchmarks, and avoid reliance on manual CoT annotations or supervised fine-tuning. Collectively, our results suggest that long, human-like CoT reasoning may not be necessary for general video reasoning, and that concise reasoning can be both effective and efficient. Our code will be released at https://github.com/LaVi-Lab/Rethink_CoT_Video.
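The paper's core idea is to feed the model a reduced set of visual tokens rather than every frame embedding. As an illustration only (not the authors' actual compression module, which is described in the paper and code release), a minimal sketch of one common way to shrink a token sequence is to average-pool consecutive groups of tokens down to a fixed budget; the function name and shapes here are hypothetical:

```python
def compress_visual_tokens(tokens, keep):
    """Average-pool consecutive groups of visual tokens down to `keep` tokens.

    tokens: list of equal-length vectors (lists of floats), e.g. frame/patch
            embeddings of shape (num_tokens, dim).
    keep:   target number of tokens after compression (keep <= num_tokens).
    """
    n = len(tokens)
    dim = len(tokens[0])
    out = []
    for i in range(keep):
        # Group i covers a contiguous, roughly equal slice of the sequence.
        start = i * n // keep
        end = (i + 1) * n // keep
        group = tokens[start:end]
        # Mean-pool the group into a single token.
        out.append([sum(v[d] for v in group) / len(group) for d in range(dim)])
    return out

# Example: 256 toy tokens of dimension 4 compressed to a 32-token budget.
video_tokens = [[float(t)] * 4 for t in range(256)]
compressed = compress_visual_tokens(video_tokens, keep=32)
print(len(compressed), len(compressed[0]))  # 32 4
```

The pooled sequence would then be prepended to the text prompt in place of the full token set, cutting the context length the model must attend over at inference time.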

Yiwu Zhong, Zi-Yuan Hu, Yin Li, Liwei Wang • 2025

Related benchmarks

Task | Dataset | Result | Rank
Video Understanding | MVBench | - | 247
Video Understanding | VideoMME | 60.6 (Overall Score) | 192
Long Video Understanding | LongVideoBench | 55.7 (Score) | 110
Long Video Understanding | MLVU | - | 72
Video Understanding | EgoSchema | - | 49
Video Reasoning | Video-Holmes | 41.6 (Score) | 20
Video Understanding | Video-TT | 40.4 (Score) | 6
Multi-modal Video Understanding | MMVU | 63.6 (Score) | 6

Other info

GitHub
