
Few-Shot Semantic Segmentation Meets SAM3

About

Few-Shot Semantic Segmentation (FSS) focuses on segmenting novel object categories from only a handful of annotated examples. Most existing approaches rely on extensive episodic training to learn transferable representations, which is both computationally demanding and sensitive to distribution shifts. In this work, we revisit FSS from the perspective of modern vision foundation models and explore the potential of Segment Anything Model 3 (SAM3) as a training-free solution. By repurposing its Promptable Concept Segmentation (PCS) capability, we adopt a simple spatial concatenation strategy that places support and query images into a shared canvas, allowing a fully frozen SAM3 to perform segmentation without any fine-tuning or architectural changes. Experiments on PASCAL-$5^i$ and COCO-$20^i$ show that this minimal design already achieves state-of-the-art performance, outperforming many heavily engineered methods. Beyond empirical gains, we uncover that negative prompts can be counterproductive in few-shot settings, where they often weaken target representations and lead to prediction collapse despite their intended role in suppressing distractors. These findings suggest that strong cross-image reasoning can emerge from simple spatial formulations, while also highlighting limitations in how current foundation models handle conflicting prompt signals. Code at: https://github.com/WongKinYiu/FSS-SAM3
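The spatial concatenation strategy described in the abstract can be sketched as follows. This is a minimal illustration only: it builds the shared canvas holding the support and query images and translates the support mask into canvas coordinates as a prompt region. The SAM3 / Promptable Concept Segmentation inference call itself is omitted, and the function name `build_shared_canvas` is illustrative, not from the paper's released code.

```python
import numpy as np

def build_shared_canvas(support_img, support_mask, query_img):
    """Place support and query images side by side on one canvas.

    The support mask, shifted into canvas coordinates, serves as the
    prompt region for a fully frozen promptable segmenter; the query
    half of the resulting canvas-level mask is the few-shot prediction.
    (The actual SAM3 call is not shown here.)
    """
    h = max(support_img.shape[0], query_img.shape[0])
    ws, wq = support_img.shape[1], query_img.shape[1]

    # Shared canvas: support on the left, query on the right.
    canvas = np.zeros((h, ws + wq, 3), dtype=support_img.dtype)
    canvas[: support_img.shape[0], :ws] = support_img
    canvas[: query_img.shape[0], ws:] = query_img

    # Support annotation in canvas coordinates -> the visual prompt.
    prompt_mask = np.zeros((h, ws + wq), dtype=bool)
    prompt_mask[: support_mask.shape[0], :ws] = support_mask.astype(bool)

    # ws is the x-offset where the query half begins; a canvas-level
    # prediction would be cropped at [:, ws:] to recover the query mask.
    return canvas, prompt_mask, ws

# Toy usage with synthetic images.
sup = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
msk = np.zeros((64, 64), dtype=np.uint8)
msk[16:48, 16:48] = 1
qry = np.random.randint(0, 255, (64, 80, 3), dtype=np.uint8)
canvas, prompt, offset = build_shared_canvas(sup, msk, qry)
```

Because the model stays frozen, this formulation requires no episodic training: cross-image reasoning happens entirely inside SAM3's attention over the composite canvas.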

Yi-Jen Tsai, Yen-Yu Lin, Chien-Yao Wang • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Few-shot Semantic Segmentation | COCO-20^i | mIoU 75.8 | 178 |
| Few-shot Semantic Segmentation | PASCAL-5^i (1-shot) | mIoU 81.2 | 53 |
| Few-shot Semantic Segmentation | PASCAL-5^i (5-shot, VOC 2012 SDS) | mIoU (5^0) 76.2 | 15 |
