
Video-R2: Reinforcing Consistent and Grounded Reasoning in Multimodal Language Models

About

Reasoning over dynamic visual content remains a central challenge for multimodal large language models. Recent thinking models generate explicit reasoning traces for interpretability; however, their reasoning often appears convincing while being logically inconsistent or weakly grounded in visual evidence. We identify and formalize these issues through two diagnostic metrics: Think-Answer Consistency (TAC), which measures the alignment between reasoning and answers, and Video Attention Score (VAS), which captures the extent to which reasoning depends on visual versus textual cues. Analysis across 11 video reasoning benchmarks shows that current models rely heavily on linguistic priors rather than visual content. To address this, we propose a reinforcement learning approach that enhances both temporal precision and reasoning consistency. Our approach combines timestamp-aware supervised fine-tuning with Group Relative Policy Optimization (GRPO) guided by a novel Temporal Alignment Reward (TAR). This two-stage post-training recipe encourages temporally aligned and causally coherent video reasoning. The resulting model, Video-R2, achieves consistently higher TAC, VAS, and accuracy across multiple benchmarks, demonstrating that improvements in temporal alignment and reasoning coherence lead to more accurate and trustworthy video understanding. Code: https://github.com/mbzuai-oryx/Video-R2
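The abstract does not spell out the exact form of the Temporal Alignment Reward or the GRPO update, so the sketch below is only illustrative: it assumes TAR is an interval-overlap (IoU) score between a predicted time span and a reference span, and shows the group-relative reward normalization that characterizes GRPO. The function names and the IoU choice are assumptions, not the paper's definitions.

```python
# Hypothetical sketch only: the paper's exact TAR is not given in the abstract.
# Here we assume a temporal-IoU reward and standard GRPO-style normalization.

def temporal_iou(pred, ref):
    """IoU between two (start, end) time intervals in seconds (assumed TAR form)."""
    inter = max(0.0, min(pred[1], ref[1]) - max(pred[0], ref[0]))
    union = (pred[1] - pred[0]) + (ref[1] - ref[0]) - inter
    return inter / union if union > 0 else 0.0

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages: rewards for a group of sampled responses,
    mean-centered and scaled by the group's standard deviation."""
    n = len(rewards)
    mean = sum(rewards) / n
    std = (sum((r - mean) ** 2 for r in rewards) / n) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Example: four sampled responses grounding an event whose reference span is 5-15 s.
rewards = [temporal_iou(p, (5.0, 15.0))
           for p in [(0.0, 10.0), (5.0, 15.0), (8.0, 20.0), (30.0, 40.0)]]
advantages = grpo_advantages(rewards)
```

Responses whose cited timestamps overlap the reference span receive positive advantage relative to the group, which is the mechanism by which such a reward would push generations toward temporally grounded reasoning.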

Muhammad Maaz, Hanoona Rasheed, Fahad Shahbaz Khan, Salman Khan • 2025

Related benchmarks

Task                      Dataset               Metric     Result   Rank
Video Question Answering  VideoMME              Accuracy   63.8     210
Video Question Answering  VideoMMMU             Accuracy   53.92    124
Video Question Answering  LongVideoBench (val)  Accuracy   59.2     55
Video Question Answering  MMVU (val)            Accuracy   67.4     15
Video Question Answering  SciVideoBench         Accuracy   28.4     13
