
GraphThinker: Reinforcing Video Reasoning with Event Graph Thinking

About

Video reasoning requires understanding the causal relationships between events in a video. However, such relationships are often implicit and costly to annotate manually. Existing multimodal large language models (MLLMs) typically infer event relations from dense captions or video summaries, but this kind of modeling still lacks causal understanding. Without explicit modeling of causal structure within and across video events, these models suffer from hallucinations during video reasoning. In this work, we propose GraphThinker, a reinforcement finetuning-based method that constructs structured event-level scene graphs and strengthens visual grounding to jointly reduce hallucinations in video reasoning. Specifically, we first employ an MLLM to construct an event-based video scene graph (EVSG) that explicitly models both intra- and inter-event relations, and incorporate the resulting scene graphs into the MLLM as an intermediate thinking process. We also introduce a visual attention reward during reinforcement finetuning, which strengthens video grounding and further mitigates hallucinations. We evaluate GraphThinker on two datasets, RexTime and VidHalluc, where it shows superior ability to capture object and event relations with more precise event localization, reducing hallucinations in video reasoning compared to prior methods.
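To make the EVSG idea concrete, here is a minimal sketch of such a graph as a data structure. The class and field names (`Event`, `EVSG`, `triples`, `edges`) are illustrative assumptions, not the paper's actual schema: intra-event relations are stored as (subject, predicate, object) triples inside each event, and inter-event relations (e.g. causal links) as edges between events.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """One video event with its time span and intra-event relations."""
    event_id: str
    description: str
    start: float  # start time in seconds
    end: float    # end time in seconds
    # intra-event relations: (subject, predicate, object) triples
    triples: list = field(default_factory=list)

@dataclass
class EVSG:
    """Event-based video scene graph: events plus inter-event edges."""
    events: dict = field(default_factory=dict)
    # inter-event relations: (source_event_id, relation, target_event_id)
    edges: list = field(default_factory=list)

    def add_event(self, event: Event) -> None:
        self.events[event.event_id] = event

    def link(self, src: str, relation: str, dst: str) -> None:
        self.edges.append((src, relation, dst))

# Hypothetical example: two events linked by a causal edge.
graph = EVSG()
graph.add_event(Event("e1", "person picks up a cup", 2.0, 4.5,
                      [("person", "holds", "cup")]))
graph.add_event(Event("e2", "person drinks", 4.5, 7.0,
                      [("person", "drinks_from", "cup")]))
graph.link("e1", "causes", "e2")
```

A serialized form of such a graph could then be inserted into the MLLM's context as the intermediate thinking step the abstract describes.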

Zixu Cheng, Da Li, Jian Hu, Yuhang Zang, Ziquan Liu, Shaogang Gong, Wei Li • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Moment Localization | RexTime 1.0 (test) | mIoU | 41.46 | 17 |
| VQA | RexTime 1.0 (test) | Accuracy | 0.713 | 15 |
| Video Reasoning | VidHalluc (test) | Binary QA Accuracy (ACH) | 66.04 | 13 |
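For readers unfamiliar with the moment-localization metric above: mIoU is the mean temporal intersection-over-union between predicted and ground-truth time spans. A minimal sketch (the interval values are made up for illustration):

```python
def temporal_iou(pred: tuple, gt: tuple) -> float:
    """IoU of two time intervals given as (start, end) in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def mean_iou(preds: list, gts: list) -> float:
    """Average temporal IoU over a set of (prediction, ground-truth) pairs."""
    return sum(temporal_iou(p, g) for p, g in zip(preds, gts)) / len(preds)

# Example: prediction (0, 10) vs. ground truth (5, 15)
# intersection = 5, union = 15, so IoU = 1/3.
iou = temporal_iou((0.0, 10.0), (5.0, 15.0))
```

The table's mIoU of 41.46 means the predicted event spans overlap the annotated spans by about 41% on average under this measure.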
