
Instance Localization for Self-supervised Detection Pretraining

About

Prior research on self-supervised learning has led to considerable progress on image classification, but often with degraded transfer performance on object detection. The objective of this paper is to advance self-supervised pretrained models specifically for object detection. Based on the inherent difference between classification and detection, we propose a new self-supervised pretext task, called instance localization. Image instances are pasted at various locations and scales onto background images. The pretext task is to predict the instance category given the composited images as well as the foreground bounding boxes. We show that integration of bounding boxes into pretraining promotes better task alignment and architecture alignment for transfer learning. In addition, we propose an augmentation method on the bounding boxes to further enhance the feature alignment. As a result, our model becomes weaker at ImageNet semantic classification but stronger at image patch localization, yielding an overall stronger pretrained model for object detection. Experimental results demonstrate that our approach yields state-of-the-art transfer learning results for object detection on PASCAL VOC and MS COCO.
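The core data construction described above (pasting a foreground instance onto a background at a random location and scale, and recording its bounding box) can be sketched as follows. This is a minimal illustrative composition step, not the paper's released implementation; the function name, scale range, and nearest-neighbor resize are assumptions made for the sketch.

```python
import numpy as np

def paste_instance(background, instance, rng=None):
    """Composite a foreground instance onto a background image at a
    random location and scale; return the image and the pasted box.

    `background`: H x W x 3 array; `instance`: h x w x 3 array.
    (Hypothetical helper for illustration; the paper's actual
    augmentation pipeline may differ.)
    """
    rng = np.random.default_rng() if rng is None else rng
    H, W, _ = background.shape
    h, w, _ = instance.shape

    # Random scale factor, clamped so the patch fits in the background.
    s = rng.uniform(0.5, 1.0)
    nh = max(1, min(H, int(h * s)))
    nw = max(1, min(W, int(w * s)))

    # Nearest-neighbor resize of the instance patch.
    ys = np.arange(nh) * h // nh
    xs = np.arange(nw) * w // nw
    patch = instance[ys][:, xs]

    # Random top-left corner so the patch stays fully inside the image.
    y0 = rng.integers(0, H - nh + 1)
    x0 = rng.integers(0, W - nw + 1)

    composite = background.copy()
    composite[y0:y0 + nh, x0:x0 + nw] = patch
    bbox = (x0, y0, x0 + nw, y0 + nh)  # (x1, y1, x2, y2)
    return composite, bbox
```

During pretraining, the composite image and `bbox` would be fed to the network (e.g. via RoI pooling on the box), with the instance's category as the classification target.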

Ceyuan Yang, Zhirong Wu, Bolei Zhou, Stephen Lin • 2021

Related benchmarks

| Task                               | Dataset                     | Metric         | Result | Rank |
|------------------------------------|-----------------------------|----------------|--------|------|
| Object Detection                   | COCO 2017 (val)             | -              | -      | 2454 |
| Instance Segmentation              | COCO 2017 (val)             | -              | -      | 1144 |
| Semantic Segmentation              | ADE20K                      | mIoU           | 37.3   | 936  |
| Semantic Segmentation              | Cityscapes                  | mIoU           | 75.4   | 578  |
| Instance Segmentation              | COCO                        | AP (mask)      | 36.8   | 279  |
| Image Classification               | CUB                         | Accuracy       | 75.83  | 249  |
| Object Detection                   | COCO                        | AP50 (box)     | 60.9   | 190  |
| Semantic Segmentation              | Pascal VOC                  | mIoU           | 0.729  | 172  |
| Fine-Grained Visual Classification | NABirds (test)              | Top-1 Accuracy | 76.36  | 157  |
| Object Detection                   | PASCAL VOC 2007+2012 (test) | mAP            | 58.4   | 95   |
Showing 10 of 21 rows
