
SAM4MLLM: Enhance Multi-Modal Large Language Model for Referring Expression Segmentation

About

We introduce SAM4MLLM, an approach that integrates the Segment Anything Model (SAM) with Multi-Modal Large Language Models (MLLMs) for pixel-aware tasks. Our method enables MLLMs to learn pixel-level location information without requiring excessive modifications to the existing model architecture or adding specialized tokens. We propose an inquiry-based approach in which the MLLM is asked for prompt points, which are then passed to SAM to perform segmentation. It combines detailed visual information with the powerful expressive capabilities of large language models in a unified language-based manner, without additional computational overhead during learning. Experimental results on public benchmarks demonstrate the effectiveness of our approach.

Yi-Chia Chen, Wei-Hua Li, Cheng Sun, Yu-Chiang Frank Wang, Chu-Song Chen • 2024
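
To make the inquiry-based flow concrete, below is a minimal Python sketch of the idea described above, assuming the official segment-anything package. The ask_mllm_for_points function is a hypothetical placeholder for the MLLM inquiry step (the paper's actual prompting scheme is not reproduced here), and all file paths, the referring expression, and the coordinates are illustrative.

```python
import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor


def ask_mllm_for_points(image, expression):
    """Hypothetical stand-in for the MLLM inquiry step.

    In SAM4MLLM the MLLM answers, in plain language, where prompt
    points for the referred object lie; here we return a hard-coded
    example answer already parsed into (x, y) coordinates and
    foreground/background labels.
    """
    point_coords = np.array([[245, 310]])  # (x, y) pixel positions
    point_labels = np.array([1])           # 1 = foreground, 0 = background
    return point_coords, point_labels


# Load an image and a SAM checkpoint (paths are placeholders).
image = np.array(Image.open("example.jpg").convert("RGB"))
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(image)

# 1) Ask the MLLM for prompt points given a referring expression.
coords, labels = ask_mllm_for_points(image, "the dog on the left")

# 2) Feed those points to SAM to obtain segmentation masks.
masks, scores, _ = predictor.predict(
    point_coords=coords,
    point_labels=labels,
    multimask_output=True,
)
best_mask = masks[np.argmax(scores)]  # keep SAM's highest-scoring mask
```

Because the MLLM communicates only through point prompts expressed in language, SAM itself needs no fine-tuning in this sketch; the segmentation quality hinges on how reliably the inquiry step localizes the referred object.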

Related benchmarks

Task                               Dataset           Metric  Result  Rank
Referring Expression Segmentation  RefCOCO (testA)   cIoU    82.8    217
Referring Expression Segmentation  RefCOCO+ (val)    cIoU    74.6    201
Referring Image Segmentation       RefCOCO+ (testB)  mIoU    67.2    200
Referring Image Segmentation       RefCOCO (val)     mIoU    79.8    197
Referring Expression Segmentation  RefCOCO (testB)   cIoU    76.1    191
Referring Expression Segmentation  RefCOCO+ (testA)  cIoU    80.0    190
Referring Expression Segmentation  RefCOCO (val)     cIoU    79.8    190
Referring Expression Segmentation  RefCOCO+ (testB)  cIoU    67.2    188
Referring Image Segmentation       RefCOCO (testA)   mIoU    82.7    178
Reasoning Segmentation             ReasonSeg (val)   cIoU    60.4    145
Showing 10 of 37 rows
