Do Instance Priors Help Weakly Supervised Semantic Segmentation?
About
Semantic segmentation requires dense pixel-level annotations, which are costly and time-consuming to acquire. To address this, we present SeSAM, a framework that adapts a foundation segmentation model, the Segment Anything Model (SAM), to work with weak labels such as coarse masks, scribbles, and points. SAM was designed for instance-based segmentation and cannot be applied directly to semantic segmentation. In this work, we identify the specific challenges SAM faces in this setting and determine the components needed to adapt it to class-based segmentation from weak labels. Specifically, SeSAM decomposes class masks into connected components, samples point prompts along object skeletons, selects SAM masks by their weak-label coverage, and iteratively refines labels using pseudo-labels, enabling SAM-generated masks to be used effectively for semantic segmentation. Integrated into a semi-supervised learning framework, SeSAM balances ground-truth labels, SAM-based pseudo-labels, and high-confidence pseudo-labels, significantly improving segmentation quality. Extensive experiments across multiple benchmarks and weak annotation types show that SeSAM consistently outperforms weakly supervised baselines while substantially reducing annotation cost relative to full supervision.
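The prompt-generation and mask-selection steps described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names are hypothetical, and the distance-transform threshold (deep interior pixels) stands in for true skeleton extraction.

```python
import numpy as np
from scipy import ndimage


def sample_interior_prompts(class_mask, points_per_component=3, seed=0):
    """Split a binary class mask into connected components and sample
    point prompts from each component's deep interior (a stand-in for
    sampling along the skeleton)."""
    rng = np.random.default_rng(seed)
    labeled, num = ndimage.label(class_mask)  # connected components
    prompts = []
    for comp_id in range(1, num + 1):
        comp = labeled == comp_id
        # Distance to background; its ridge approximates the object skeleton.
        dist = ndimage.distance_transform_edt(comp)
        ys, xs = np.nonzero(dist >= 0.7 * dist.max())
        k = min(points_per_component, len(ys))
        idx = rng.choice(len(ys), size=k, replace=False)
        prompts.append(np.stack([xs[idx], ys[idx]], axis=1))  # (x, y) points
    return prompts


def coverage_score(sam_mask, weak_label):
    """Score a candidate SAM mask by the fraction of weak-label
    pixels it covers (used to pick among SAM's mask proposals)."""
    inter = np.logical_and(sam_mask, weak_label).sum()
    return inter / max(weak_label.sum(), 1)


# Toy example: a mask with two disjoint blobs yields one prompt set per blob.
mask = np.zeros((20, 20), dtype=bool)
mask[2:6, 2:10] = True
mask[12:18, 12:16] = True
prompts = sample_interior_prompts(mask)
```

Each prompt set would then be fed to SAM for the corresponding component, and the candidate mask with the highest weak-label coverage kept as that component's pseudo-label.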
Related benchmarks
| Task | Dataset | Result (mIoU) | Rank |
|---|---|---|---|
| Semantic segmentation | ADE20K | 43.5 | 366 |
| Semantic segmentation | Cityscapes | 75.1 | 218 |
| Semantic segmentation | PASCAL VOC 2012 | 71.4 | 218 |
| Semantic segmentation (scribble-supervised) | PASCAL VOC 2012 (val) | 78.1 | 11 |
| Semantic segmentation (point-supervised) | PASCAL VOC 2012 (val) | 75.3 | 9 |