
Extract Free Dense Labels from CLIP

About

Contrastive Language-Image Pre-training (CLIP) has made a remarkable breakthrough in open-vocabulary zero-shot image recognition. Many recent studies leverage pre-trained CLIP models for image-level classification and manipulation. In this paper, we wish to examine the intrinsic potential of CLIP for pixel-level dense prediction, specifically semantic segmentation. To this end, with minimal modification, we show that MaskCLIP yields compelling segmentation results on open concepts across various datasets in the absence of annotations and fine-tuning. By adding pseudo labeling and self-training, MaskCLIP+ surpasses SOTA transductive zero-shot semantic segmentation methods by large margins, e.g., mIoUs of unseen classes on PASCAL VOC/PASCAL Context/COCO Stuff are improved from 35.6/20.7/30.3 to 86.1/66.7/54.7. We also test the robustness of MaskCLIP under input corruption and evaluate its capability in discriminating fine-grained objects and novel concepts. Our findings suggest that MaskCLIP can serve as a new reliable source of supervision for dense prediction tasks to achieve annotation-free segmentation. Source code is available at https://github.com/chongzhou96/MaskCLIP.
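The central idea — reusing CLIP's class-name text embeddings as a fixed per-pixel classifier over dense visual features — can be illustrated with a minimal, self-contained sketch. Note this uses synthetic features for clarity: in the actual method, the dense features come from CLIP's image encoder (with the attention pooling replaced so per-patch value features are kept) and the class weights from CLIP's text encoder; the function name below is our own, not from the paper's codebase.

```python
import numpy as np

def maskclip_style_segmentation(dense_feats, text_embeds):
    """Label each pixel with the class whose text embedding is most
    cosine-similar to that pixel's visual feature.

    dense_feats: (H, W, D) per-pixel visual features
    text_embeds: (C, D) one embedding per class name
    returns:     (H, W) integer class map
    """
    # L2-normalize both sides so the dot product is cosine similarity
    f = dense_feats / np.linalg.norm(dense_feats, axis=-1, keepdims=True)
    t = text_embeds / np.linalg.norm(text_embeds, axis=-1, keepdims=True)
    logits = f @ t.T          # (H, W, C) similarity to every class
    return logits.argmax(-1)  # hard assignment per pixel

# Toy example: a 4x4 "image" whose top half matches class 0's prototype
# and whose bottom half matches class 1's (stand-ins for CLIP features).
rng = np.random.default_rng(0)
text = rng.normal(size=(2, 8))
feats = np.empty((4, 4, 8))
feats[:2] = text[0]
feats[2:] = text[1]
seg = maskclip_style_segmentation(feats, text)
```

Because no segmentation head is trained, the set of classes is whatever list of names is fed to the text encoder — which is what makes the annotation-free, open-vocabulary setting possible.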

Chong Zhou, Chen Change Loy, Bo Dai • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Semantic segmentation | ADE20K (val) | mIoU | 12.2 | 2888 |
| Semantic segmentation | PASCAL VOC 2012 (val) | Mean IoU | 29.3 | 2142 |
| Semantic segmentation | ADE20K | mIoU | 12.3 | 1024 |
| Semantic segmentation | Cityscapes | mIoU | 25.6 | 658 |
| Semantic segmentation | Cityscapes (val) | mIoU | 25.2 | 572 |
| Semantic segmentation | COCO Stuff | mIoU | 2.39e+3 | 379 |
| Semantic segmentation | Cityscapes (val) | mIoU | 12.6 | 374 |
| Semantic segmentation | ADE20K | mIoU | 11.9 | 366 |
| Semantic segmentation | PASCAL VOC (val) | mIoU | 70 | 362 |
| Semantic segmentation | PASCAL Context (val) | mIoU | 48.2 | 360 |
Showing 10 of 197 rows.
