
ViTCoT: Video-Text Interleaved Chain-of-Thought for Boosting Video Understanding in Large Language Models

About

Video understanding plays a vital role in bridging low-level visual signals with high-level cognitive reasoning, and is fundamental to applications such as autonomous driving, embodied AI, and the broader pursuit of AGI. The rapid development of large language models (LLMs), particularly those employing Chain-of-Thought (CoT) techniques, has significantly advanced video reasoning capabilities. However, current approaches rely primarily on textual information for reasoning, overlooking the visual modality during the actual reasoning process. In contrast, humans naturally re-examine visual content while reasoning. Motivated by this, we introduce a novel video reasoning paradigm: Video-Text Interleaved CoT (ViTCoT), which facilitates more intuitive and cognitively aligned reasoning. To this end, we first construct the Video-Text Interleaved Benchmark (ViTIB), created by using MLLMs for key-video selection and then manually verified. Furthermore, we extensively explore the potential of the ViTCoT paradigm for video understanding. Extensive experiments demonstrate that ViTCoT significantly enhances performance compared to the traditional text-only CoT paradigm and effectively activates more neuron values in MLLMs.
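The core idea of interleaving re-examined visual evidence with textual reasoning steps can be sketched as prompt construction. The snippet below is a minimal, hypothetical illustration (not the authors' implementation), assuming an MLLM chat API that accepts mixed image/text content blocks; `build_vitcot_prompt`, the frame paths, and the message schema are all assumptions for illustration:

```python
# Hypothetical sketch of a video-text interleaved CoT prompt, assuming an
# MLLM chat API that accepts lists of mixed image/text content blocks.
# This is NOT the authors' implementation, only an illustration of the idea.

def build_vitcot_prompt(question, reasoning_steps, key_frames):
    """Interleave textual reasoning steps with selected key video frames.

    question: the video QA question.
    reasoning_steps: list of textual CoT steps.
    key_frames: list of frame references (e.g. file paths), one per step;
        None means the step revisits no visual evidence.
    """
    content = [{"type": "text", "text": f"Question: {question}"}]
    for step, frame in zip(reasoning_steps, key_frames):
        content.append({"type": "text", "text": step})
        if frame is not None:
            # Re-examine visual evidence mid-reasoning, mimicking how
            # humans look back at the video while thinking.
            content.append({"type": "image", "image": frame})
    content.append({"type": "text", "text": "Answer:"})
    return [{"role": "user", "content": content}]


prompt = build_vitcot_prompt(
    "What does the person do after opening the fridge?",
    ["Step 1: Locate the moment the fridge is opened.",
     "Step 2: Inspect the following frames for the next action."],
    ["frame_012.jpg", "frame_018.jpg"],  # hypothetical key frames
)
```

In a text-only CoT baseline, the frame entries would simply be omitted; the paradigm difference is precisely the image blocks inserted between reasoning steps.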

Yongheng Zhang, Xu Liu, Ruihan Tao, Qiguang Chen, Hao Fei, Wanxiang Che, Libo Qin• 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Multimodal Summarization | Vista | BLEU-4 | 4.28 | 5 |
| Multimodal Summarization | SoccerNet | BLEU-4 | 1.46 | 5 |
| Multimodal Summarization | VIEWS | BLEU-4 | 3.5 | 5 |
| Multimodal Summarization | XMSMO | BLEU-4 | 1.67 | 5 |
| Multimodal Summarization | TIB | BLEU-4 | 1.68 | 5 |
| Multimodal Summarization | MM-AVS | BLEU-4 | 4.26 | 5 |
| Multimodal Summarization | Summ. | BLEU-4 | 0.8 | 5 |
