
SegCLIP: Patch Aggregation with Learnable Centers for Open-Vocabulary Semantic Segmentation

About

Recently, contrastive language-image pre-training (e.g., CLIP) has demonstrated promising results on various downstream tasks. The pre-trained model can capture rich visual concepts by learning from large-scale image-text data. However, transferring the learned visual knowledge to open-vocabulary semantic segmentation remains under-explored. In this paper, we propose a CLIP-based model named SegCLIP for open-vocabulary segmentation in an annotation-free manner. SegCLIP builds on ViT, and its main idea is to gather patches into semantic regions through learnable centers, trained on image-text pairs. The gathering operation dynamically captures semantic groups, which are used to generate the final segmentation results. We further propose a reconstruction loss on masked patches and a superpixel-based KL loss with pseudo-labels to enhance the visual representation. Experimental results show that our model achieves comparable or superior segmentation accuracy on PASCAL VOC 2012 (+0.3% mIoU), PASCAL Context (+2.3% mIoU), and COCO (+2.2% mIoU) compared with baselines. We release the code at https://github.com/ArrowLuo/SegCLIP.
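The core "gathering" idea — a set of learnable center embeddings that softly assign ViT patch tokens to semantic groups via cross-attention — can be sketched as follows. This is a hedged illustration, not the paper's actual implementation; the module name `PatchGather`, the single-head attention, and all hyperparameters are assumptions for clarity.

```python
import torch
import torch.nn as nn

class PatchGather(nn.Module):
    """Hypothetical sketch: gather ViT patch tokens into semantic groups
    using a set of learnable center embeddings (cross-attention style)."""

    def __init__(self, dim: int, num_centers: int):
        super().__init__()
        # One learnable center per candidate semantic group.
        self.centers = nn.Parameter(torch.randn(num_centers, dim) * dim ** -0.5)
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)

    def forward(self, patches: torch.Tensor):
        # patches: (B, N, D) patch tokens from the ViT encoder.
        B, N, D = patches.shape
        q = self.to_q(self.centers).unsqueeze(0).expand(B, -1, -1)  # (B, C, D)
        k = self.to_k(patches)                                      # (B, N, D)
        v = self.to_v(patches)                                      # (B, N, D)
        attn = (q @ k.transpose(1, 2)) * D ** -0.5                  # (B, C, N)
        # Soft patch-to-center assignment: softmax over centers gives,
        # for each patch, a distribution over semantic groups — this can
        # be upsampled into a segmentation map at inference time.
        assign = attn.softmax(dim=1)                                # (B, C, N)
        # Group features: each center aggregates the patches it attends to.
        groups = attn.softmax(dim=-1) @ v                           # (B, C, D)
        return groups, assign
```

During contrastive training on image-text pairs, the group features would be pooled and aligned with text embeddings, so that each center learns to cover a coherent semantic region without any mask annotations.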

Huaishao Luo, Junwei Bao, Youzheng Wu, Xiaodong He, Tianrui Li • 2022

Related benchmarks

Task                   Dataset                 Metric    Result (mIoU, %)  Rank
Semantic segmentation  PASCAL VOC 2012 (val)   Mean IoU  52.6              2040
Semantic segmentation  ADE20K                  mIoU      8.7               936
Semantic segmentation  PASCAL Context (val)    mIoU      24.7              323
Semantic segmentation  Pascal VOC (test)       mIoU      52.6              236
Semantic segmentation  Pascal Context (test)   mIoU      24.7              176
Semantic segmentation  Pascal VOC              mIoU      52.6              172
Semantic segmentation  Pascal Context          mIoU      24.7              111
Semantic segmentation  COCO                    mIoU      26.5              96
Semantic segmentation  Pascal Context 60       mIoU      24.7              81
Semantic segmentation  COCO Object (val)       mIoU      26.5              77

Showing 10 of 45 rows

Other info

Code: https://github.com/ArrowLuo/SegCLIP