IBISAgent: Reinforcing Pixel-Level Visual Reasoning in MLLMs for Universal Biomedical Object Referring and Segmentation
About
Recent research on medical MLLMs has gradually shifted its focus from image-level understanding to fine-grained, pixel-level comprehension. Although segmentation serves as the foundation for pixel-level understanding, existing approaches face two major challenges. First, they introduce implicit segmentation tokens and require simultaneous fine-tuning of both the MLLM and external pixel decoders, which increases the risk of catastrophic forgetting and limits generalization to out-of-domain scenarios. Second, most methods rely on single-pass reasoning and lack the capability to iteratively refine segmentation results, leading to suboptimal performance. To overcome these limitations, we propose a novel agentic MLLM, named IBISAgent, which reformulates segmentation as a vision-centric, multi-step decision-making process. IBISAgent enables MLLMs to generate interleaved reasoning and text-based click actions, invoke segmentation tools, and produce high-quality masks without architectural modifications. By iteratively performing multi-step visual reasoning on masked image features, IBISAgent naturally supports mask refinement and promotes the development of pixel-level visual reasoning capabilities. We further design a two-stage training framework consisting of cold-start supervised fine-tuning and agentic reinforcement learning with tailored, fine-grained rewards, enhancing the model's robustness in complex medical referring and reasoning segmentation tasks. Extensive experiments demonstrate that IBISAgent consistently outperforms both closed-source and open-source state-of-the-art (SOTA) methods. All datasets, code, and trained models will be released publicly.
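The loop described above (interleaved reasoning, text-based click actions, tool invocation, and iterative mask refinement) can be sketched in miniature. This is an illustrative toy, not the released implementation: `propose_click`, `run_segmenter`, and `mask_iou` are hypothetical stand-ins for the MLLM policy, the external segmentation tool, and the evaluation metric, and masks are modeled as sets of pixel coordinates.

```python
# Toy sketch of an agentic click-and-segment loop. All names here are
# illustrative stand-ins, not IBISAgent's actual API.

def mask_iou(a, b):
    """IoU between two binary masks given as sets of (y, x) pixels."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def run_segmenter(image, clicks):
    """Stand-in for the external segmentation tool: returns the union of
    3x3 squares centered on the accumulated clicks."""
    h, w = len(image), len(image[0])
    mask = set()
    for cy, cx in clicks:
        for y in range(max(0, cy - 1), min(h, cy + 2)):
            for x in range(max(0, cx - 1), min(w, cx + 2)):
                mask.add((y, x))
    return mask

def propose_click(image, current_mask, target):
    """Stand-in for the MLLM's text-based click action: inspect the current
    mask and point at an uncovered target pixel, or None if covered."""
    for p in sorted(target):
        if p not in current_mask:
            return p
    return None

def segment_with_agent(image, target, max_steps=5):
    """Multi-step decision loop: reason -> click -> invoke tool -> refine."""
    clicks, mask = [], set()
    for _ in range(max_steps):
        click = propose_click(image, mask, target)
        if click is None:  # target fully covered: stop refining
            break
        clicks.append(click)
        mask = run_segmenter(image, clicks)  # refine mask with all clicks
    return mask

image = [[0] * 8 for _ in range(8)]
target = {(y, x) for y in range(2, 5) for x in range(2, 5)}  # 3x3 object
mask = segment_with_agent(image, target)
print(round(mask_iou(mask, target), 2))  # → 0.36
```

The point of the sketch is the control flow: each step conditions the next click on the current mask, so coverage improves monotonically across steps, mirroring the iterative refinement that single-pass methods lack.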
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Medical Visual Question Answering | Slake | Accuracy 83.5 | 134 |
| Medical Visual Question Answering | VQA-RAD | Accuracy 73.4 | 106 |
| Medical Visual Question Answering | PathVQA | Overall Accuracy 69.2 | 86 |
| Interactive Segmentation | In-domain (test) | IoU 86.37 | 14 |
| Segmentation | BiomedParseData official (Dtest) | IoU 85.58 | 13 |
| Segmentation | MeCOVQA-G+ (test) | IoU 80.63 | 13 |
| Segmentation | Held-out in-house (test) | IoU 72.09 | 13 |
| Medical Image Segmentation | Inhouse (test) | -- | 9 |
| Interactive Segmentation | MeCOVQA-G+ | IoU 81.56 | 7 |
| Medical Image Segmentation | MeCOVQA-G+ out-of-domain (test) | IoU 80.63 | 6 |