
GenSeg-R1: RL-Driven Vision-Language Grounding for Fine-Grained Referring Segmentation

About

We study fine-grained referring image segmentation via a decoupled reason-then-segment pipeline. A vision-language model (VLM) receives an image and a natural-language query, reasons about the scene, and emits structured spatial prompts: a bounding box plus two interior keypoints for every referred instance. A frozen promptable segmenter (SAM 2) converts these prompts into high-quality masks. Within our GenSeg-R1 framework we fine-tune Qwen3-VL models (4B and 8B parameters) using Group Relative Policy Optimization (GRPO), requiring no supervised reasoning-chain annotations. On RefCOCOg validation, our best model (GenSeg-R1-8B) achieves 0.7127 cIoU and 0.7382 mIoU, substantially outperforming the corresponding Qwen3-VL Instruct baselines (+15.3 and +21.9 points, respectively) and surpassing Seg-Zero-7B [3] by +3.3 cIoU under identical evaluation. We further introduce GenSeg-R1-G, a variant trained on GRefCOCO [9] with a SAM 2 in-the-loop reward that directly optimizes mask quality. On GRefCOCO validation, GenSeg-R1-G achieves 76.69% target mIoU with 82.40% accuracy on negative (no-target) prompts, substantially outperforming Seg-R1-7B and Seg-Zero-7B, which lack no-target detection. On the ReasonSeg test set, GenSeg-R1-4B reaches 68.40% mIoU, surpassing Seg-Zero-7B by +7.0 points and Seg-R1-7B by +10.7.
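The exact output schema of the VLM is not given here; as a minimal sketch, assume it emits a JSON list of instances, each with a bounding box and two interior keypoints (the field names `bbox` and `points` are hypothetical), which we parse and validate before handing to the promptable segmenter:

```python
import json

def parse_spatial_prompts(vlm_output: str):
    """Parse a VLM response into (box, keypoints) prompt pairs.

    Assumed (hypothetical) schema: a JSON list of instances, each with
    "bbox" = [x1, y1, x2, y2] and "points" = [[px, py], [qx, qy]].
    Keypoints are required to lie inside their box, matching the
    "two interior keypoints" constraint described above.
    """
    prompts = []
    for inst in json.loads(vlm_output):
        x1, y1, x2, y2 = inst["bbox"]
        assert x1 < x2 and y1 < y2, "degenerate box"
        pts = inst["points"]
        assert len(pts) == 2, "expected exactly two interior keypoints"
        for px, py in pts:
            assert x1 <= px <= x2 and y1 <= py <= y2, "keypoint outside box"
        prompts.append(((x1, y1, x2, y2), pts))
    return prompts

raw = '[{"bbox": [10, 20, 110, 220], "points": [[40, 60], [80, 150]]}]'
print(parse_spatial_prompts(raw))
```

Each validated (box, points) pair would then be fed to the frozen segmenter; SAM 2's image predictor accepts exactly this combination of box and point prompts per instance.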
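GRPO's core trick is to score a group of sampled completions and normalize each reward by the group's mean and standard deviation, so no learned value baseline is needed. A minimal sketch, using mask IoU against ground truth as the reward signal (our assumption of how the SAM 2 in-the-loop mask-quality reward enters; the paper's exact reward shaping is not shown here):

```python
from statistics import mean, pstdev

def mask_iou(pred: set, gt: set) -> float:
    """IoU between two binary masks, each given as a set of pixel coords."""
    union = len(pred | gt)
    return len(pred & gt) / union if union else 1.0

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages: center by the group mean and scale by
    the group std, so better-than-average completions get positive
    advantage and worse-than-average ones get negative advantage."""
    mu, sigma = mean(rewards), pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# One group of 4 sampled completions, each decoded to a mask and scored
# against the same ground-truth mask.
gt = {(x, y) for x in range(10) for y in range(10)}
preds = [
    {(x, y) for x in range(10) for y in range(10)},  # perfect mask
    {(x, y) for x in range(5) for y in range(10)},   # covers half
    {(x, y) for x in range(8) for y in range(10)},   # covers most
    set(),                                           # empty mask
]
rewards = [mask_iou(p, gt) for p in preds]
advs = grpo_advantages(rewards)
print([round(a, 3) for a in advs])
```

The advantages sum to (approximately) zero within each group; the perfect mask gets the largest positive advantage and the empty mask the most negative, which is the gradient signal GRPO feeds back into the policy.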

Sandesh Hegde, Jaison Saji Chacko, Debarshi Banerjee, Uma Mahesh • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Reasoning Segmentation | ReasonSeg (test) | gIoU 68.4 | 102 |
| Generalized Referring Expression Segmentation | gRefCOCO v1 (val) | cIoU 76.83 | 33 |
| Bounding box detection | RefCOCOg (val) | mIoU (bbox) 81.01 | 8 |
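The benchmarks report both cIoU and mIoU. As commonly defined in this literature, cIoU (cumulative IoU) pools intersections and unions over the whole dataset before dividing, while mIoU averages per-image IoUs, so large objects weigh more heavily in cIoU. A minimal sketch of the distinction:

```python
def iou(inter: int, union: int) -> float:
    return inter / union if union else 1.0

def c_iou(pairs):
    """Cumulative IoU: sum intersections and unions across all images,
    then divide once (large images/objects dominate)."""
    return iou(sum(i for i, _ in pairs), sum(u for _, u in pairs))

def m_iou(pairs):
    """Mean IoU: average the per-image IoUs (every image counts once)."""
    return sum(iou(i, u) for i, u in pairs) / len(pairs)

# (intersection, union) pixel counts for three images; the first image
# is much larger, so it dominates cIoU but counts once in mIoU.
pairs = [(90, 100), (5, 10), (1, 10)]
print(round(c_iou(pairs), 4), round(m_iou(pairs), 4))
```

Here cIoU is 96/120 = 0.8 while mIoU is (0.9 + 0.5 + 0.1)/3 = 0.5, which is why the two metrics can diverge noticeably on the same predictions.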
