ReWatch-R1: Boosting Complex Video Reasoning in Large Vision-Language Models through Agentic Data Synthesis
About
While Reinforcement Learning with Verifiable Reward (RLVR) significantly advances image reasoning in Large Vision-Language Models (LVLMs), its application to complex video reasoning remains underdeveloped. This gap stems primarily from a critical data bottleneck: existing datasets lack the challenging, multi-hop questions and high-quality, video-grounded Chain-of-Thought (CoT) data necessary to effectively bootstrap RLVR. To address this, we introduce ReWatch, a large-scale dataset built to foster advanced video reasoning. We propose a novel multi-stage pipeline to synthesize its three components: ReWatch-Caption, ReWatch-QA, and ReWatch-CoT. A core innovation is our Multi-Agent ReAct framework for CoT synthesis, which simulates a human-like "re-watching" process to generate video-grounded reasoning traces by explicitly modeling information retrieval and verification. Building on this dataset, we develop ReWatch-R1 by post-training a strong baseline LVLM with Supervised Fine-Tuning (SFT) and our RLVR framework. This framework incorporates a novel Observation & Reasoning (O&R) reward mechanism that evaluates both the final answer's correctness and the reasoning's alignment with video content, directly penalizing hallucination. Our experiments show that ReWatch-R1 achieves state-of-the-art average performance on five challenging video reasoning benchmarks. Project Page: https://rewatch-r1.github.io
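The O&R reward idea described above can be sketched as a simple scalar combination: one term for verifiable answer correctness, one term for how much of the reasoning trace is grounded in the video. This is a minimal illustrative sketch, not the paper's actual formulation; the function name, the grounding fraction, and the weighting `alpha` are all assumptions.

```python
def o_and_r_reward(answer_correct: bool,
                   grounded_steps: int,
                   total_steps: int,
                   alpha: float = 0.5) -> float:
    """Hypothetical Observation & Reasoning (O&R) style reward.

    Combines final-answer correctness with the fraction of CoT steps
    that were verified against video content. Ungrounded (hallucinated)
    steps lower the grounding fraction, so hallucination is penalized
    even when the final answer happens to be correct.
    """
    answer_reward = 1.0 if answer_correct else 0.0
    grounding = grounded_steps / total_steps if total_steps > 0 else 0.0
    return alpha * answer_reward + (1.0 - alpha) * grounding
```

With `alpha = 0.5`, a correct answer backed by a fully grounded trace scores 1.0, while a correct answer whose reasoning is entirely unverified scores only 0.5, reflecting the dual objective the abstract describes.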
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Video Question Answering | VideoMME | -- | 99 |
| Video Question Answering | VideoMMMU | Accuracy: 51.9 | 52 |
| Video Question Answering | LVBench | Overall Score: 43.3 | 32 |
| Temporal Grounding | SurgVidLM Out-of-domain (test) | R@0.3: 69.96 | 13 |
| Temporal Grounding | OphVL In-domain (test) | R@0.3: 45.85 | 13 |
| Grounded VQA | SurgVidLM Out-of-domain (test) | mIoU: 25.47 | 13 |
| Grounded VQA | OphVL In-domain (test) | mIoU: 36.2 | 13 |
| Grounded VQA | MedVideoCap In-domain (test) | mIoU: 35.25 | 13 |
| Video Question Answering | Video-Holmes | Average Score: 44.3 | 6 |