
EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model

About

Segment Anything Model (SAM) has attracted widespread attention for its superior interactive segmentation capabilities with visual prompts, while text prompts remain underexplored. In this paper, we empirically investigate which text prompt encoders (e.g., CLIP or LLM) are suitable for adapting SAM to referring expression segmentation, and introduce the Early Vision-language Fusion-based SAM (EVF-SAM). EVF-SAM is a simple yet effective referring segmentation method that exploits multimodal prompts (i.e., image and text); it comprises a pre-trained vision-language model to generate referring prompts and a SAM model for segmentation. Surprisingly, we observe that (1) multimodal prompts and (2) vision-language models with early fusion (e.g., BEIT-3) are beneficial for prompting SAM toward accurate referring segmentation. Our experiments show that the proposed EVF-SAM based on BEIT-3 obtains state-of-the-art performance on RefCOCO/+/g for referring expression segmentation, demonstrating the superiority of prompting SAM with early vision-language fusion. In addition, the proposed EVF-SAM, with 1.32B parameters, achieves remarkably higher performance while using nearly 82% fewer parameters than previous SAM methods based on large multimodal models.
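The pipeline the abstract describes — a multimodal encoder that fuses image and text tokens from the first layer, whose output embedding then prompts SAM's mask decoder — can be caricatured in a few lines of dependency-free Python. Everything below is illustrative, not the authors' implementation: mean-pooling stands in for BEIT-3's joint attention, a dot-product scorer stands in for SAM's mask decoder, and all names are hypothetical.

```python
import random

def embed_tokens(tokens, dim=8):
    # Toy embedding: a deterministic pseudo-random vector per token,
    # standing in for a learned embedding table.
    return [[random.Random(f"{t}-{i}").uniform(-1, 1) for i in range(dim)]
            for t in tokens]

def early_fusion_encode(image_patches, text_tokens):
    # Early fusion: image and text tokens are concatenated and processed
    # jointly, so the encoder mixes modalities from the very first layer
    # (here, mean-pooling plays the role of BEIT-3's shared attention).
    joint = embed_tokens(image_patches) + embed_tokens(text_tokens)
    dim = len(joint[0])
    # The pooled vector plays the role of the embedding fed to SAM as a prompt.
    return [sum(vec[i] for vec in joint) / len(joint) for i in range(dim)]

def sam_prompt_decoder(image_patches, prompt_embedding):
    # Stub for SAM's mask decoder: score each patch against the prompt
    # and threshold into a binary mask over patches.
    scores = [sum(a * b for a, b in zip(emb, prompt_embedding))
              for emb in embed_tokens(image_patches)]
    return [1 if s > 0 else 0 for s in scores]

patches = [f"patch_{i}" for i in range(16)]
prompt = early_fusion_encode(patches, ["the", "zebra", "on", "the", "left"])
mask = sam_prompt_decoder(patches, prompt)
```

The point of the sketch is the data flow: unlike late-fusion designs that encode text separately and bolt it on afterward, the text tokens here influence the prompt embedding jointly with the image from the start.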

Yuxuan Zhang, Tianheng Cheng, Lianghui Zhu, Rui Hu, Lei Liu, Heng Liu, Longjin Ran, Xiaoxin Chen, Wenyu Liu, Xinggang Wang • 2024

Related benchmarks

| Task                             | Dataset           | Metric   | Result | Rank |
|----------------------------------|-------------------|----------|--------|------|
| Referring Expression Segmentation | RefCOCO (testA)  | cIoU     | 84.2   | 217  |
| Referring Expression Segmentation | RefCOCO+ (val)   | cIoU     | 76.5   | 201  |
| Referring Expression Segmentation | RefCOCO (testB)  | cIoU     | 80.2   | 191  |
| Referring Expression Segmentation | RefCOCO (val)    | cIoU     | 82.4   | 190  |
| Referring Expression Segmentation | RefCOCO+ (testA) | cIoU     | 80.0   | 190  |
| Referring Expression Segmentation | RefCOCO+ (testB) | cIoU     | 71.9   | 188  |
| Visual Grounding                  | RefCOCO+ (val)   | --       | --     | 171  |
| Visual Grounding                  | RefCOCO+ (testB) | --       | --     | 169  |
| Visual Grounding                  | RefCOCO+ (testA) | Accuracy | 75.5   | 168  |
| Reasoning Segmentation            | ReasonSeg (val)  | cIoU     | 55.7   | 145  |

(Showing 10 of 24 rows.)
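The segmentation rows above report cIoU (cumulative IoU): intersections and unions are summed over the whole dataset before dividing, so larger objects weigh more than under a per-image mean IoU. A minimal sketch with flat binary masks (the function name and the toy data are illustrative):

```python
def ciou(pred_masks, gt_masks):
    """Cumulative IoU: sum intersections and unions over all masks, then divide."""
    inter = union = 0
    for pred, gt in zip(pred_masks, gt_masks):
        inter += sum(p & g for p, g in zip(pred, gt))
        union += sum(p | g for p, g in zip(pred, gt))
    return inter / union

# Two flat binary masks (1 = foreground pixel).
preds = [[1, 1, 0, 0], [1, 1, 1, 1]]
gts   = [[1, 0, 0, 0], [1, 1, 1, 0]]
print(round(ciou(preds, gts), 3))  # (1 + 3) / (2 + 4) = 0.667
```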
