# PyVision-RL: Forging Open Agentic Vision Models via RL

## About
Reinforcement learning for agentic multimodal models often suffers from interaction collapse, where models learn to reduce tool usage and multi-turn reasoning, limiting the benefits of agentic behavior. We introduce PyVision-RL, a reinforcement learning framework for open-weight multimodal models that stabilizes training and sustains interaction. Our approach combines an oversampling-filtering-ranking rollout strategy with an accumulative tool reward to prevent collapse and encourage multi-turn tool use. Using a unified training pipeline, we develop PyVision-Image and PyVision-Video for image and video understanding. For video reasoning, PyVision-Video employs on-demand context construction, selectively sampling task-relevant frames during reasoning to significantly reduce visual token usage. Experiments show strong performance and improved efficiency, demonstrating that sustained interaction and on-demand visual processing are critical for scalable multimodal agents.
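The oversampling-filtering-ranking rollout strategy and the accumulative tool reward can be illustrated with a minimal sketch. The function and field names below (`accumulative_tool_reward`, `select_rollouts`, `per_call_bonus`, `cap`) are illustrative assumptions, not the paper's actual API; the idea is that the task reward is augmented by a capped bonus per tool call, and that oversampled rollouts are filtered for tool interaction and then ranked before training.

```python
def accumulative_tool_reward(base_reward, tool_calls, per_call_bonus=0.1, cap=0.5):
    """Hypothetical accumulative tool reward: augment the task reward with a
    bonus that grows with the number of tool calls, capped so that tool use
    is encouraged without overwhelming correctness."""
    return base_reward + min(per_call_bonus * tool_calls, cap)

def select_rollouts(rollouts, group_size):
    """Sketch of oversample-filter-rank (names are illustrative):
    1. oversample: the caller generates more rollouts than needed;
    2. filter: drop rollouts with no tool interaction (falling back to the
       full set if every rollout would be dropped);
    3. rank: keep the top `group_size` by accumulated reward."""
    filtered = [r for r in rollouts if r["tool_calls"] > 0] or rollouts
    ranked = sorted(
        filtered,
        key=lambda r: accumulative_tool_reward(r["reward"], r["tool_calls"]),
        reverse=True,
    )
    return ranked[:group_size]

# Toy example: 6 oversampled rollouts, keep 3 for the training group.
rollouts = [
    {"reward": 1.0, "tool_calls": 0},  # correct but tool-free -> filtered out
    {"reward": 1.0, "tool_calls": 3},
    {"reward": 0.0, "tool_calls": 2},
    {"reward": 1.0, "tool_calls": 1},
    {"reward": 0.0, "tool_calls": 0},  # filtered out
    {"reward": 1.0, "tool_calls": 5},
]
kept = select_rollouts(rollouts, group_size=3)
```

Filtering out zero-interaction rollouts before ranking is what counteracts interaction collapse in this sketch: tool-free trajectories cannot dominate the training group even when they happen to be correct.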
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multimodal Reasoning | WeMath | Accuracy | 47.7 | 43 |
| Multimodal Reasoning | DynaMath | Accuracy | 61.6 | 24 |
| Visual Search | HR-Bench-4K | Accuracy | 78.1 | 23 |
| Visual Search | HR-Bench-8K | Accuracy | 74.3 | 23 |
| Multimodal Reasoning | MathVision | -- | -- | 23 |
| Multimodal Reasoning | MathVerse | Accuracy | 55.8 | 20 |
| Spatial Reasoning | VSI-Bench (test) | Avg Score | 44.0 | 4 |
| Agentic Reasoning | TIR-Bench | Accuracy | 19.8 | 3 |