
Open-Vocabulary Universal Image Segmentation with MaskCLIP

About

In this paper, we tackle an emerging computer vision task, open-vocabulary universal image segmentation, which aims to perform semantic/instance/panoptic segmentation (background semantic labeling + foreground instance segmentation) for arbitrary text-described categories at inference time. We first build a baseline method by directly adopting pre-trained CLIP models without finetuning or distillation. We then develop MaskCLIP, a Transformer-based approach built around a MaskCLIP Visual Encoder, an encoder-only module that seamlessly integrates mask tokens with a pre-trained ViT CLIP model for semantic/instance segmentation and class prediction. MaskCLIP learns to efficiently and effectively utilize pre-trained partial/dense CLIP features within the MaskCLIP Visual Encoder, avoiding the time-consuming student-teacher training process. MaskCLIP outperforms previous methods for semantic/instance/panoptic segmentation on the ADE20K and PASCAL datasets. We show qualitative illustrations of MaskCLIP with online custom categories. Project website: https://maskclip.github.io.
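The core architectural idea in the abstract — learnable mask tokens refined jointly with frozen, pre-trained CLIP ViT features inside an encoder-only module, with no student-teacher distillation — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the module name, token counts, and layer sizes here are assumptions for demonstration.

```python
import torch
import torch.nn as nn


class MaskCLIPVisualEncoderSketch(nn.Module):
    """Hypothetical sketch of the mask-token idea: learnable mask tokens are
    concatenated with dense patch features from a frozen CLIP ViT and refined
    by encoder-only self-attention, so no distillation stage is required."""

    def __init__(self, num_mask_tokens=100, dim=768, num_layers=2, num_heads=8):
        super().__init__()
        # One learnable query token per candidate mask (assumed count).
        self.mask_tokens = nn.Parameter(torch.zeros(1, num_mask_tokens, dim))
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, clip_patch_tokens):
        # clip_patch_tokens: (B, N, dim) dense features from a frozen CLIP ViT.
        b = clip_patch_tokens.size(0)
        tokens = torch.cat(
            [self.mask_tokens.expand(b, -1, -1), clip_patch_tokens], dim=1)
        tokens = self.encoder(tokens)  # mask and patch tokens attend jointly
        q = self.mask_tokens.size(1)
        mask_emb, patch_emb = tokens[:, :q], tokens[:, q:]
        # Per-mask segmentation logits: dot product of mask and patch features.
        mask_logits = torch.einsum("bqd,bnd->bqn", mask_emb, patch_emb)
        # mask_emb would then be matched against CLIP text embeddings of the
        # open-vocabulary category names for class prediction.
        return mask_emb, mask_logits
```

In use, `mask_logits` would be reshaped to the patch grid and upsampled into binary masks, while `mask_emb` is compared against CLIP text embeddings to assign each mask an open-vocabulary label.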

Zheng Ding, Jieke Wang, Zhuowen Tu • 2022

Related benchmarks

| Task                  | Dataset                       | Metric | Result | Rank |
|-----------------------|-------------------------------|--------|--------|------|
| Semantic segmentation | ADE20K                        | mIoU   | 23.7   | 936  |
| Semantic segmentation | Cityscapes                    | mIoU   | 17.7   | 578  |
| Semantic segmentation | COCO Stuff                    | mIoU   | 8.8    | 195  |
| Semantic segmentation | ADE20K A-150                  | mIoU   | 23.7   | 188  |
| Semantic segmentation | Pascal VOC                    | mIoU   | 0.388  | 172  |
| Semantic segmentation | Pascal Context 59             | mIoU   | 45.9   | 164  |
| Object detection      | LVIS (val)                    | mAP    | 8.4    | 141  |
| Semantic segmentation | PASCAL-Context 59 class (val) | mIoU   | 45.9   | 125  |
| Semantic segmentation | Pascal VOC 20                 | mIoU   | 41.7   | 105  |
| Panoptic segmentation | ADE20K (val)                  | PQ     | 15.121 | 89   |

Showing 10 of 62 rows.

Other info

Code
