# CP2: Copy-Paste Contrastive Pretraining for Semantic Segmentation

## About
Recent advances in self-supervised contrastive learning yield good image-level representations, which favor classification tasks but usually neglect pixel-level detail, leading to unsatisfactory transfer performance on dense prediction tasks such as semantic segmentation. In this work, we propose a pixel-wise contrastive learning method called CP2 (Copy-Paste Contrastive Pretraining), which facilitates both image- and pixel-level representation learning and is therefore better suited to downstream dense prediction tasks. In detail, we copy-paste a random crop from an image (the foreground) onto different background images and pretrain a semantic segmentation model with the objective of 1) distinguishing the foreground pixels from the background pixels, and 2) identifying the composed images that share the same foreground. Experiments show the strong performance of CP2 in downstream semantic segmentation: by finetuning CP2 pretrained models on PASCAL VOC 2012, we obtain 78.6% mIoU with a ResNet-50 backbone and 79.5% mIoU with a ViT-S backbone.
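The copy-paste composition step described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the function name, NumPy array layout (H, W, C), and fixed crop placement are assumptions. Pasting the same foreground crop onto two different backgrounds yields a positive pair for the image-level objective, and the returned binary mask supplies the pixel-level foreground/background labels.

```python
import numpy as np

def copy_paste(foreground, background, top, left, crop_h, crop_w):
    """Paste a crop of `foreground` onto a copy of `background`.

    Returns the composed image and a binary mask that marks the
    pasted (foreground) pixels -- the dense supervision signal used
    to distinguish foreground from background pixels.
    """
    composed = background.copy()
    mask = np.zeros(background.shape[:2], dtype=bool)
    crop = foreground[top:top + crop_h, left:left + crop_w]
    composed[top:top + crop_h, left:left + crop_w] = crop
    mask[top:top + crop_h, left:left + crop_w] = True
    return composed, mask

# Same foreground on two backgrounds -> a positive pair for contrastive learning.
fg = np.ones((8, 8, 3), dtype=np.float32)
bg_a = np.zeros((8, 8, 3), dtype=np.float32)
bg_b = np.full((8, 8, 3), 0.5, dtype=np.float32)
view_a, mask_a = copy_paste(fg, bg_a, top=2, left=2, crop_h=4, crop_w=4)
view_b, mask_b = copy_paste(fg, bg_b, top=1, left=3, crop_h=4, crop_w=4)
```

In the actual method the crop location and backgrounds are sampled randomly each iteration; the sketch fixes them only for clarity.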
## Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Classification | ImageNet (val) | -- | 1206 |
| Video Object Segmentation | DAVIS 2017 (val) | J mean 51.3 | 1130 |
| Semantic Segmentation | ADE20K | mIoU 25.4 | 936 |
| Semantic Segmentation | PASCAL VOC (val) | mIoU 65.2 | 338 |
| Semantic Segmentation | COCO Stuff (val) | mIoU 46.5 | 126 |
| Semantic Segmentation | COCO Object (val) | mIoU 0.594 | 77 |
| Semantic Segmentation | VOC 2012 (val) | mIoU 63.1 | 67 |
| Unsupervised Semantic Segmentation | PASCAL VOC 2012 (val) | mIoU 9.5 | 15 |
| Unsupervised Segmentation | COCO-Things (val) | mIoU 12.9 | 13 |
| Unsupervised Segmentation | COCO Stuff (val) | mIoU 13.6 | 13 |