
IBISAgent: Reinforcing Pixel-Level Visual Reasoning in MLLMs for Universal Biomedical Object Referring and Segmentation

About

Recent research on medical MLLMs has gradually shifted its focus from image-level understanding to fine-grained, pixel-level comprehension. Although segmentation serves as the foundation for pixel-level understanding, existing approaches face two major challenges. First, they introduce implicit segmentation tokens and require simultaneous fine-tuning of both the MLLM and external pixel decoders, which increases the risk of catastrophic forgetting and limits generalization to out-of-domain scenarios. Second, most methods rely on single-pass reasoning and lack the capability to iteratively refine segmentation results, leading to suboptimal performance. To overcome these limitations, we propose a novel agentic MLLM, IBISAgent, which reformulates segmentation as a vision-centric, multi-step decision-making process. IBISAgent enables MLLMs to generate interleaved reasoning and text-based click actions, invoke segmentation tools, and produce high-quality masks without architectural modifications. By iteratively performing multi-step visual reasoning on masked image features, IBISAgent naturally supports mask refinement and promotes the development of pixel-level visual reasoning capabilities. We further design a two-stage training framework consisting of cold-start supervised fine-tuning and agentic reinforcement learning with tailored, fine-grained rewards, enhancing the model's robustness in complex medical referring and reasoning segmentation tasks. Extensive experiments demonstrate that IBISAgent consistently outperforms both closed-source and open-source SOTA methods. All datasets, code, and trained models will be released publicly.
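The abstract describes a click-then-segment loop: the model proposes a click action, an external segmentation tool turns the accumulated clicks into a mask, and the model reasons over the masked image to decide the next refinement. The sketch below illustrates that control flow only; `propose_click` and `segment` are toy stand-ins (the paper's actual MLLM policy and segmentation tool are not public at the time of writing), and all function names here are hypothetical.

```python
import numpy as np

def propose_click(image, mask):
    # Toy stand-in for the MLLM policy: IBISAgent emits reasoning text plus
    # a click coordinate; here we simply pick the brightest unmasked pixel.
    scores = np.where(mask, -np.inf, image)
    return np.unravel_index(np.argmax(scores), image.shape)

def segment(image, clicks, threshold=0.5):
    # Toy stand-in for the external segmentation tool: grow a mask from the
    # clicked seed pixels by simple intensity thresholding.
    mask = np.zeros(image.shape, dtype=bool)
    for y, x in clicks:
        mask |= image >= min(threshold, image[y, x])
    return mask

def agentic_segmentation(image, max_steps=3):
    # Multi-step decision loop: propose a click, invoke the tool, then
    # reason over the masked image to choose the next refinement click.
    clicks, mask = [], np.zeros(image.shape, dtype=bool)
    for _ in range(max_steps):
        clicks.append(propose_click(image, mask))
        mask = segment(image, clicks)
    return mask, clicks

img = np.array([[0.1, 0.9],
                [0.8, 0.2]])
mask, clicks = agentic_segmentation(img, max_steps=2)
```

The point of the loop structure is that the mask is re-estimated after every click, so later clicks can correct earlier under- or over-segmentation, which single-pass token-based decoders cannot do.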

Yankai Jiang, Qiaoru Li, Binlu Xu, Haoran Sun, Chao Ding, Junting Dong, Yuxiang Cai, Xuhong Zhang, Jianwei Yin • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Medical Visual Question Answering | Slake | Accuracy | 83.5 | 134
Medical Visual Question Answering | VQA-RAD | Accuracy | 73.4 | 106
Medical Visual Question Answering | PathVQA | Overall Accuracy | 69.2 | 86
Interactive Segmentation | In-domain (test) | IoU | 86.37 | 14
Segmentation | BiomedParseData official (Dtest) | IoU | 85.58 | 13
Segmentation | MeCOVQA-G+ (test) | IoU | 80.63 | 13
Segmentation | Held-out in-house (test) | IoU | 72.09 | 13
Medical Image Segmentation | Inhouse (test) | -- | -- | 9
Interactive Segmentation | MeCOVQA-G+ | IoU | 81.56 | 7
Medical Image Segmentation | MeCOVQA-G+ out-of-domain (test) | IoU | 80.63 | 6
