
SAM-REF: Introducing Image-Prompt Synergy during Interaction for Detail Enhancement in the Segment Anything Model

About

Interactive segmentation aims to produce a mask of a target object according to the user's interactive prompts. There are two mainstream strategies: early fusion and late fusion. Current specialist models use the early fusion strategy, encoding the combination of image and prompts to target the prompted objects, but repeated heavy computation on the image results in high latency. Late fusion models extract image embeddings once and merge them with the prompts in later interactions. This strategy avoids redundant image feature extraction and significantly improves efficiency; a recent milestone is the Segment Anything Model (SAM). However, it limits the model's ability to extract detailed information from the prompted target zone. To address this issue, we propose SAM-REF, a two-stage refinement framework that fully integrates images and prompts by inserting a lightweight refiner into the late-fusion interaction loop, combining the accuracy of early fusion with the efficiency of late fusion. Through extensive experiments, we show that SAM-REF outperforms the current state-of-the-art methods on most segmentation-quality metrics without compromising efficiency.
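The latency argument above can be made concrete with a toy sketch. The functions below are purely illustrative (none of these names come from the SAM or SAM-REF code): early fusion re-runs the heavy image encoder on every click round, while late fusion encodes the image once and reuses the cached embedding; the hypothetical SAM-REF-style session keeps the single encoder pass but adds a cheap per-round refiner that re-sees the image.

```python
# Illustrative sketch of early vs. late fusion in interactive segmentation.
# All function names, shapes, and costs are hypothetical, not SAM-REF's API.

def heavy_image_encoder(image):
    """Stand-in for an expensive backbone; runs over the whole image."""
    return [sum(row) for row in image]  # dummy "embedding"

def light_prompt_fusion(embedding, clicks):
    """Stand-in for a cheap decoder that merges cached features with prompts."""
    return [e + len(clicks) for e in embedding]  # dummy "mask"

def lightweight_refiner(image, coarse_mask, clicks):
    """Stand-in for SAM-REF's cheap refiner: re-combines image and prompts."""
    return coarse_mask  # dummy pass-through

def early_fusion_session(image, click_rounds):
    # Early fusion: re-encode (image + prompts) jointly on every round.
    encoder_calls = 0
    for clicks in click_rounds:
        _ = heavy_image_encoder(image)  # accurate, but repeated heavy work
        encoder_calls += 1
    return encoder_calls

def late_fusion_session(image, click_rounds):
    # Late fusion (SAM-style): encode the image once, then fuse per round.
    embedding = heavy_image_encoder(image)
    for clicks in click_rounds:
        _ = light_prompt_fusion(embedding, clicks)
    return 1  # only one heavy encoder call

def sam_ref_style_session(image, click_rounds):
    # Late fusion plus a lightweight refiner each round, so the model can
    # recover detail from the prompted region without re-running the backbone.
    embedding = heavy_image_encoder(image)
    for clicks in click_rounds:
        coarse = light_prompt_fusion(embedding, clicks)
        _ = lightweight_refiner(image, coarse, clicks)
    return 1
```

For three click rounds, the early-fusion session pays for three encoder passes while both late-fusion variants pay for one, which is the efficiency gap the abstract refers to.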

Chongkai Yu, Ting Liu, Anqi Li, Xiaochao Qu, Chengjing Wu, Luoqi Liu, Xiaolin Hu • 2024

Related benchmarks

Task                             Dataset           Metric    Result   Rank
Interactive Segmentation         Berkeley          NoC@90    1.43     230
Interactive Image Segmentation   GrabCut           NoC@90    1.36     28
Interactive Image Segmentation   DAVIS             NoC@90    4.54     27
Interactive Image Segmentation   SBD               NoC@90    4.44     16
Interactive Image Segmentation   HQSeg-44K (val)   5-mIoU    90.75    12
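NoC@90 (Number of Clicks at 90% IoU) counts how many simulated user clicks a model needs before its predicted mask reaches 90% intersection-over-union with the ground truth, so lower is better; 5-mIoU is mean IoU after five clicks, so higher is better. A minimal sketch of the NoC@90 computation, with a toy click simulator standing in for a real model (all names here are hypothetical):

```python
# Sketch of the NoC@90 metric used in the table above. The click
# simulator below is a toy stand-in for a real interactive model.

def iou(pred, gt):
    """Intersection-over-union of two binary masks (flat 0/1 lists)."""
    inter = sum(p and g for p, g in zip(pred, gt))
    union = sum(p or g for p, g in zip(pred, gt))
    return inter / union if union else 1.0

def noc_at_90(predict_after_clicks, gt, max_clicks=20):
    """Number of clicks until the prediction reaches 90% IoU with gt.

    predict_after_clicks(n) returns the model's mask after n simulated
    clicks; max_clicks caps the budget, as benchmark protocols do.
    """
    for n in range(1, max_clicks + 1):
        if iou(predict_after_clicks(n), gt) >= 0.9:
            return n
    return max_clicks

# Toy model: each click correctly fills one more foreground pixel.
gt = [1, 1, 1, 1, 0]
def toy_model(n):
    k = min(n, 4)
    return [1] * k + [0] * (5 - k)
```

With this toy model, `noc_at_90(toy_model, gt)` returns 4, since IoU only crosses 0.9 once all four foreground pixels are recovered.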
