SAM4MLLM: Enhance Multi-Modal Large Language Model for Referring Expression Segmentation

About

We introduce SAM4MLLM, an approach that integrates the Segment Anything Model (SAM) with Multi-Modal Large Language Models (MLLMs) for pixel-aware tasks. Our method enables MLLMs to learn pixel-level location information without requiring excessive modifications to the existing model architecture or the addition of specialized tokens. We propose an inquiry-based approach that effectively finds prompt points for SAM to perform segmentation based on the MLLM. It combines detailed visual information with the powerful expressive capabilities of large language models in a unified, language-based manner, without additional computational overhead during learning. Experimental results on public benchmarks demonstrate the effectiveness of our approach.
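To make the pipeline concrete, here is a minimal sketch of the inquiry-based flow described above, assuming a hypothetical `query_mllm` helper that stands in for any MLLM able to answer, in plain language, where the referred object is. Only the `SamPredictor` calls are the actual segment-anything API; the rest is illustrative, not the released SAM4MLLM code.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def query_mllm(image, expression):
    """Hypothetical helper: ask an MLLM where the object referred to by
    `expression` is, and parse its textual answer into prompt points.
    Returns (point_coords, point_labels): Nx2 (x, y) pixel coordinates
    and 1/0 labels for positive/negative points."""
    raise NotImplementedError  # stand-in for the MLLM inquiry step

def segment_by_expression(image, expression, checkpoint="sam_vit_h.pth"):
    # 1. Inquiry step: the MLLM proposes SAM prompt points in plain text,
    #    so no new tokens or architectural changes are needed.
    point_coords, point_labels = query_mllm(image, expression)

    # 2. Segmentation step: hand those points to SAM as prompts.
    sam = sam_model_registry["vit_h"](checkpoint=checkpoint)
    predictor = SamPredictor(sam)
    predictor.set_image(image)  # image: HxWx3 uint8 RGB array
    masks, scores, _ = predictor.predict(
        point_coords=np.asarray(point_coords, dtype=np.float32),
        point_labels=np.asarray(point_labels, dtype=np.int32),
        multimask_output=False,
    )
    return masks[0]  # boolean HxW mask for the referred object
```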

Yi-Chia Chen, Wei-Hua Li, Cheng Sun, Yu-Chiang Frank Wang, Chu-Song Chen • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Referring Image Segmentation | RefCOCO (val) | mIoU 79.8 | 259 |
| Referring Expression Segmentation | RefCOCO (testA) | cIoU 82.8 | 257 |
| Referring Image Segmentation | RefCOCO+ (testB) | mIoU 67.2 | 252 |
| Referring Expression Segmentation | RefCOCO+ (testA) | cIoU 80.0 | 230 |
| Referring Image Segmentation | RefCOCO (testA) | mIoU 82.7 | 230 |
| Referring Expression Segmentation | RefCOCO+ (val) | cIoU 74.6 | 223 |
| Referring Expression Segmentation | RefCOCO (testB) | cIoU 76.1 | 213 |
| Referring Expression Segmentation | RefCOCO (val) | cIoU 79.8 | 212 |
| Referring Expression Segmentation | RefCOCO+ (testB) | cIoU 67.2 | 210 |
| Reasoning Segmentation | ReasonSeg (val) | gIoU 58.4 | 193 |

Showing 10 of 39 rows.
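For reading the Result column: mIoU and gIoU average per-image IoUs over a split, while cIoU pools intersections and unions across the whole split. The sketch below follows these common conventions for the RefCOCO and ReasonSeg benchmarks; it is an assumption about the metric definitions, not code from the paper.

```python
import numpy as np

def per_image_iou(pred, gt):
    """IoU of two boolean HxW masks."""
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union > 0 else 1.0

def g_iou(preds, gts):
    # gIoU / mIoU convention: mean of per-image IoUs over the split.
    return float(np.mean([per_image_iou(p, g) for p, g in zip(preds, gts)]))

def c_iou(preds, gts):
    # cIoU convention: cumulative intersection over cumulative union.
    inter = sum(int(np.logical_and(p, g).sum()) for p, g in zip(preds, gts))
    union = sum(int(np.logical_or(p, g).sum()) for p, g in zip(preds, gts))
    return inter / union
```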
