
PyVision-RL: Forging Open Agentic Vision Models via RL

About

Reinforcement learning for agentic multimodal models often suffers from interaction collapse, where models learn to reduce tool usage and multi-turn reasoning, limiting the benefits of agentic behavior. We introduce PyVision-RL, a reinforcement learning framework for open-weight multimodal models that stabilizes training and sustains interaction. Our approach combines an oversampling-filtering-ranking rollout strategy with an accumulative tool reward to prevent collapse and encourage multi-turn tool use. Using a unified training pipeline, we develop PyVision-Image and PyVision-Video for image and video understanding. For video reasoning, PyVision-Video employs on-demand context construction, selectively sampling task-relevant frames during reasoning to significantly reduce visual token usage. Experiments show strong performance and improved efficiency, demonstrating that sustained interaction and on-demand visual processing are critical for scalable multimodal agents.
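To make the anti-collapse mechanism concrete, here is a minimal sketch of what an oversampling-filtering-ranking rollout step with an accumulative tool reward might look like. All function names, the reward shape, and the constants are illustrative assumptions, not the paper's actual implementation; the idea is simply that reward grows with tool use (up to a cap) and zero-interaction rollouts are filtered out before ranking.

```python
def accumulative_tool_reward(num_tool_calls, per_call_bonus=0.1, cap=0.5):
    """Hypothetical reward term that accumulates with each tool call up to
    a cap, so the policy is not pushed toward zero-tool (collapsed)
    trajectories. Constants are placeholders."""
    return min(num_tool_calls * per_call_bonus, cap)

def total_reward(rollout):
    """rollout: dict with 'correct' (bool) and 'tool_calls' (int)."""
    task_reward = 1.0 if rollout["correct"] else 0.0
    return task_reward + accumulative_tool_reward(rollout["tool_calls"])

def oversample_filter_rank(rollouts, keep=4):
    """Oversample many rollouts, filter out zero-interaction ones
    (falling back to the full set if all collapsed), then rank by total
    reward and keep the top `keep` for the policy update."""
    filtered = [r for r in rollouts if r["tool_calls"] > 0] or rollouts
    ranked = sorted(filtered, key=total_reward, reverse=True)
    return ranked[:keep]
```

Under this sketch, a correct rollout with several tool calls outranks an equally correct zero-tool one, which is the pressure that sustains multi-turn interaction.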
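The on-demand context construction for video can be sketched in a similarly hedged way: rather than densely encoding every frame, the model only pays the visual-token cost for frames its reasoning loop actually requests. The function below is an assumption-laden illustration of that selection step, not the paper's API.

```python
def on_demand_frames(num_frames, requested, fallback_stride=32):
    """Return only the frame indices the reasoning loop asked for,
    deduplicated, clipped to the valid range, and sorted. If nothing was
    requested, fall back to a sparse uniform stride so the model still
    sees some context. `fallback_stride` is a placeholder value."""
    if not requested:
        return list(range(0, num_frames, fallback_stride))
    return sorted({i for i in requested if 0 <= i < num_frames})
```

For a long video this keeps the visual token budget proportional to the handful of task-relevant frames rather than to the video length, which is the efficiency claim in the abstract.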

Shitian Zhao, Shaoheng Lin, Ming Li, Haoquan Zhang, Wenshuo Peng, Kaipeng Zhang, Chen Wei • 2026

Related benchmarks

| Task                 | Dataset          | Metric    | Result | Rank |
|----------------------|------------------|-----------|--------|------|
| Multimodal Reasoning | WeMath           | Accuracy  | 47.7   | 43   |
| Multimodal Reasoning | DynaMath         | Accuracy  | 61.6   | 24   |
| Visual Search        | HR-Bench-4K      | Accuracy  | 78.1   | 23   |
| Visual Search        | HR-Bench-8K      | Accuracy  | 74.3   | 23   |
| Multimodal Reasoning | MathVision       | --        | --     | 23   |
| Multimodal Reasoning | MathVerse        | Accuracy  | 55.8   | 20   |
| Spatial Reasoning    | VSI-Bench (test) | Avg Score | 44     | 4    |
| Agentic Reasoning    | TIR-Bench        | Accuracy  | 19.8   | 3    |

Other info

GitHub
