
Tarot-SAM3: Training-free SAM3 for Any Referring Expression Segmentation

About

Referring Expression Segmentation (RES) aims to segment image regions described by natural-language expressions, serving as a bridge between vision and language understanding. Existing RES methods, however, rely heavily on large annotated datasets and are limited to either explicit or implicit expressions, hindering their ability to generalize to any referring expression. Recently, the Segment Anything Model 3 (SAM3) has shown impressive robustness in Promptable Concept Segmentation. Nonetheless, applying it to RES remains challenging: (1) SAM3 struggles with longer or implicit expressions; (2) naive coupling of SAM3 with a multimodal large language model (MLLM) makes the final results overly dependent on the MLLM's reasoning capability, without enabling refinement of SAM3's segmentation outputs. To this end, we present Tarot-SAM3, a novel training-free framework that can accurately segment from any referring expression. Specifically, Tarot-SAM3 consists of two key phases. First, the Expression Reasoning Interpreter (ERI) phase introduces reasoning-assisted prompt options to support structured expression parsing and evaluation-aware rephrasing. This transforms arbitrary queries into robust heterogeneous prompts for generating reliable masks with SAM3. Second, the Mask Self-Refining (MSR) phase selects the best mask across prompt types and performs self-refinement by leveraging rich feature relationships from DINOv3 to compare discriminative regions among ERI outputs. It then infers region affiliation to the target, thereby correcting over- and under-segmentation. Extensive experiments demonstrate that Tarot-SAM3 achieves strong performance on both explicit and implicit RES benchmarks, as well as open-world scenarios. Ablation studies further validate the effectiveness of each phase.
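The two-phase pipeline described above can be sketched in pseudocode-style Python. This is a minimal illustrative skeleton, not the authors' implementation: the function names (`expression_reasoning_interpreter`, `mask_self_refining`, `tarot_sam3`), the placeholder rephrasing step, and the area-based mask scoring are all hypothetical stand-ins for the paper's MLLM-based parsing, SAM3 segmentation, and DINOv3-based refinement.

```python
def expression_reasoning_interpreter(expression):
    """ERI phase (sketch): turn an arbitrary referring expression into
    heterogeneous prompts, e.g. the raw query plus rephrasings.

    In the paper this step uses reasoning-assisted prompt options for
    structured parsing and evaluation-aware rephrasing; here it is a
    trivial placeholder."""
    return [expression, expression.strip().lower()]

def mask_self_refining(candidate_masks):
    """MSR phase (sketch): select the best mask across prompt types.

    The paper scores and refines candidates using DINOv3 feature
    relationships to correct over-/under-segmentation; this placeholder
    simply picks the candidate with the largest foreground area."""
    return max(candidate_masks, key=lambda m: sum(sum(row) for row in m))

def tarot_sam3(expression, segmenter):
    """End-to-end sketch: ERI -> per-prompt segmentation -> MSR.

    `segmenter` stands in for SAM3: it maps a text prompt to a binary
    mask (here a list of 0/1 rows)."""
    prompts = expression_reasoning_interpreter(expression)
    candidates = [segmenter(p) for p in prompts]
    return mask_self_refining(candidates)

# Toy usage with a dummy segmenter in place of SAM3.
def dummy_segmenter(prompt):
    return [[1, 0], [1, 1]] if prompt == "the red cup" else [[0, 0], [0, 1]]

mask = tarot_sam3("the red cup", dummy_segmenter)
```

The key design point the sketch mirrors is that refinement operates on SAM3's *outputs* (the candidate masks), so the final result is not solely dependent on the MLLM's reasoning in the prompting stage.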

Weiming Zhang, Dingwen Xiao, Songyue Guo, Guangyu Xiang, Shiqi Wen, Minwei Zhao, Lei Chen, Lin Wang • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Referring Expression Segmentation | RefCOCO (testA) | -- | 257 |
| Referring Expression Segmentation | RefCOCO+ (testA) | -- | 230 |
| Referring Expression Segmentation | RefCOCO+ (val) | -- | 223 |
| Referring Expression Segmentation | RefCOCO (testB) | -- | 213 |
| Referring Expression Segmentation | RefCOCO (val) | -- | 212 |
| Referring Expression Segmentation | RefCOCO+ (testB) | -- | 210 |
| Referring Expression Segmentation | RefCOCOg Google (val) | gIoU 67.2 | 15 |
