
Aligning Pretraining for Detection via Object-Level Contrastive Learning

About

Image-level contrastive representation learning has proven highly effective as a generic model for transfer learning. Such generality, however, sacrifices specificity when a particular downstream task is of interest. We argue that this can be sub-optimal and thus advocate a design principle that encourages alignment between the self-supervised pretext task and the downstream task. In this paper, we follow this principle with a pretraining method designed specifically for object detection. We attain alignment in three aspects: 1) object-level representations are introduced via selective-search bounding boxes as object proposals; 2) the pretraining network architecture incorporates the same dedicated modules used in the detection pipeline (e.g., FPN); 3) the pretraining is equipped with object detection properties such as object-level translation invariance and scale invariance. Our method, called Selective Object COntrastive learning (SoCo), achieves state-of-the-art transfer performance on COCO detection with a Mask R-CNN framework. Code is available at https://github.com/hologerry/SoCo.
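The core idea of object-level contrastive pretraining can be sketched as a per-proposal similarity loss: each selective-search proposal yields one embedding per augmented view, and the loss pulls the two views' embeddings of the same object together. Below is a minimal, hypothetical NumPy sketch of a BYOL-style per-object loss; the function name, shapes, and inputs are illustrative, not the authors' implementation (which operates on RoI-aligned FPN features in a full training pipeline).

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Row-wise L2 normalization with a small epsilon for stability."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def object_level_byol_loss(online_box_feats, target_box_feats):
    """BYOL-style negative-cosine loss averaged over K object proposals.

    online_box_feats, target_box_feats: (K, D) arrays of per-proposal
    embeddings from two augmented views (hypothetical shapes).
    Returns 2 - 2*cos_sim per proposal, averaged; 0 when views match.
    """
    p = l2_normalize(online_box_feats)
    z = l2_normalize(target_box_feats)
    return float(np.mean(2.0 - 2.0 * np.sum(p * z, axis=1)))

# Toy usage: identical embeddings from both views give (near-)zero loss.
feats = np.random.default_rng(0).normal(size=(4, 8))
print(round(object_level_byol_loss(feats, feats), 6))  # → 0.0
```

Averaging over proposals rather than over whole images is what gives the pretraining its object-level character: each box contributes its own positive pair.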

Fangyun Wei, Yue Gao, Zhirong Wu, Han Hu, Stephen Lin • 2021

Related benchmarks

| Task                  | Dataset                     | Metric     | Result | Rank |
|-----------------------|-----------------------------|------------|--------|------|
| Object Detection      | COCO 2017 (val)             | –          | –      | 2454 |
| Instance Segmentation | COCO 2017 (val)             | –          | –      | 1144 |
| Semantic Segmentation | ADE20K                      | mIoU       | 37.8   | 936  |
| Semantic Segmentation | Cityscapes                  | mIoU       | 76.5   | 578  |
| Object Detection      | LVIS v1.0 (val)             | APbbox     | 17.6   | 518  |
| Instance Segmentation | COCO                        | APmask     | 37.4   | 279  |
| Object Detection      | COCO                        | AP50 (Box) | 61.9   | 190  |
| Semantic Segmentation | Pascal VOC                  | mIoU       | 0.719  | 172  |
| Semantic Segmentation | COCO Stuff (val)            | mIoU       | 44.2   | 126  |
| Object Detection      | PASCAL VOC 2007+2012 (test) | mAP        | 59.1   | 95   |

Showing 10 of 16 rows.
