
VisionCoach: Reinforcing Grounded Video Reasoning via Visual-Perception Prompting

About

Video reasoning requires models to locate and track question-relevant evidence across frames. While reinforcement learning (RL) with verifiable rewards improves accuracy, it still struggles to achieve reliable spatio-temporal grounding during the reasoning process. Moreover, improving grounding typically relies on scaled training data or inference-time perception tools, which increases annotation or computational cost. To address this challenge, we propose VisionCoach, an input-adaptive RL framework that improves spatio-temporal grounding through visual prompting as training-time guidance. During RL training, visual prompts are selectively applied to challenging inputs to amplify question-relevant evidence and suppress distractors. The model then internalizes these improvements through self-distillation, enabling grounded reasoning directly on raw videos without visual prompting at inference. VisionCoach consists of two components: (1) a Visual Prompt Selector, which predicts appropriate prompt types conditioned on the video and question, and (2) a Spatio-Temporal Reasoner, optimized with RL under visual prompt guidance and object-aware grounding rewards that enforce object identity consistency and multi-region bounding-box overlap. Extensive experiments demonstrate that VisionCoach achieves state-of-the-art performance under comparable settings across diverse video reasoning, video understanding, and temporal grounding benchmarks (V-STAR, VideoMME, WorldSense, VideoMMMU, PerceptionTest, and Charades-STA), while maintaining a single efficient inference pathway without external tools. Our results show that visual prompting during training improves grounded video reasoning, while self-distillation enables the model to internalize this ability without requiring prompts at inference time.
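The abstract mentions object-aware grounding rewards that combine object identity consistency with multi-region bounding-box overlap, but does not give the formula. As a rough illustration only (the function names, the multiplicative combination, and the dict-of-boxes format are assumptions, not the paper's actual reward), one plausible shape of such a reward is: score label agreement between predicted and reference objects, then scale by the mean IoU over the regions of the shared objects.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def grounding_reward(pred, gold):
    """Hypothetical object-aware grounding reward.

    pred/gold: dict mapping object label -> list of boxes (one per region).
    Identity consistency is scored as Jaccard overlap of the label sets;
    localization is the mean IoU over region pairs of shared labels.
    """
    if not gold:
        return 0.0
    shared = set(pred) & set(gold)
    if not shared:
        return 0.0
    identity = len(shared) / len(set(pred) | set(gold))
    ious = [iou(pb, gb) for lbl in shared
            for pb, gb in zip(pred[lbl], gold[lbl])]
    overlap = sum(ious) / len(ious) if ious else 0.0
    return identity * overlap
```

A reward of this form is verifiable (it needs only the reference annotations), which is the property the abstract attributes to the RL setup; the exact weighting between identity and overlap terms would be a design choice of the actual method.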

Daeun Lee, Shoubin Yu, Yue Zhang, Mohit Bansal • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Video Understanding | VideoMME | Score (Long) | 53.2 | 248 |
| Temporal Grounding | Charades-STA | R@0.5 | 45.8 | 88 |
| Video Understanding | VideoMMMU | -- | -- | 32 |
| Video Understanding | WorldSense | Score | 43.8 | 25 |
| Video Understanding | PerceptionTest | Overall Score | 68.7 | 5 |

Other info

GitHub