
Training Multi-Image Vision Agents via End2End Reinforcement Learning

About

Recent VLM-based agents aim to replicate OpenAI O3's "thinking with images" via tool use, but most open-source methods limit input to a single image, falling short on real-world multi-image QA tasks. To address this, we propose IMAgent, an open-source vision agent trained via end-to-end reinforcement learning dedicated to complex multi-image tasks. By leveraging a multi-agent system, we generate challenging and visually rich multi-image QA pairs to fully activate the tool-use potential of the base VLM. Through manual verification, we obtain MIFG-QA, comprising 10k samples for training and evaluation. As reasoning chains grow deeper, VLMs tend to increasingly ignore visual inputs. We therefore develop two specialized tools for visual reflection and confirmation, allowing the model to proactively reallocate its attention to image content during inference. Benefiting from our well-designed action-trajectory two-level mask strategy, IMAgent achieves stable tool-use behavior via pure RL training without requiring costly supervised fine-tuning data. Extensive experiments demonstrate that IMAgent maintains strong performance on existing single-image benchmarks while achieving substantial improvements on our proposed multi-image dataset, with our analysis providing actionable insights for the research community. Code and data will be released soon.
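The abstract does not spell out the two-level mask, but the idea it names can be sketched as follows: at the action level, only model-generated tokens receive policy-gradient loss (tokens returned by tools are masked out); at the trajectory level, whole rollouts with malformed tool calls are excluded from the loss. The function name and the role labels below are hypothetical illustrations, not the paper's API.

```python
def two_level_mask(token_roles, trajectory_valid):
    """Sketch of an action-trajectory two-level loss mask (assumed design).

    token_roles: per-token labels, e.g. "action" for model-generated tokens
                 and "tool" for tool-returned observation tokens.
    trajectory_valid: False if the rollout contained a malformed tool call.

    Returns a per-token weight: 1.0 only for model-generated tokens inside
    valid trajectories, 0.0 everywhere else.
    """
    # Trajectory-level mask: invalid rollouts contribute no gradient at all.
    scale = 1.0 if trajectory_valid else 0.0
    # Action-level mask: tool outputs are context, not supervised targets.
    return [scale if role == "action" else 0.0 for role in token_roles]


# A valid rollout keeps loss only on the model's own tokens.
mask = two_level_mask(["action", "action", "tool", "action"], trajectory_valid=True)
# An invalid rollout is zeroed out entirely.
dropped = two_level_mask(["action", "tool"], trajectory_valid=False)
```

In an RL loop, this mask would multiply the per-token policy-gradient loss before reduction, so tool observations steer the policy only through context, never through imitation.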

Chengqi Dong, Chuhuai Yue, Hang He, Rongge Mao, Fenghe Tang, S Kevin Zhou, Zekun Xu, Xiaohan Wang, Jiajun Chai, Wei Lin, Guojun Yin • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Multimodal Understanding | MME | MME Score: 64.83 | 158 |
| Visual Search | V* | Average Success: 88.48 | 11 |
| High-Resolution Visual Reasoning | HR-Bench | Score (4K): 73.5 | 8 |
| Multi-Image Fine-Grained Question Answering | MIFG-QA | Accuracy (Nature): 50.24 | 7 |
