
Aligning Pretraining for Detection via Object-Level Contrastive Learning

About

Image-level contrastive representation learning has proven to be highly effective as a generic model for transfer learning. Such generality for transfer learning, however, sacrifices specificity if we are interested in a certain downstream task. We argue that this could be sub-optimal and thus advocate a design principle which encourages alignment between the self-supervised pretext task and the downstream task. In this paper, we follow this principle with a pretraining method specifically designed for the task of object detection. We attain alignment in the following three aspects: 1) object-level representations are introduced via selective search bounding boxes as object proposals; 2) the pretraining network architecture incorporates the same dedicated modules used in the detection pipeline (e.g. FPN); 3) the pretraining is equipped with object detection properties such as object-level translation invariance and scale invariance. Our method, called Selective Object COntrastive learning (SoCo), achieves state-of-the-art results for transfer performance on COCO detection using a Mask R-CNN framework. Code is available at https://github.com/hologerry/SoCo.
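To make the object-level idea concrete, here is a minimal NumPy sketch of the kind of loss involved: each selective-search box yields one embedding per augmented view, and corresponding box embeddings are pulled together with a BYOL-style negative-cosine objective. This is an illustrative sketch only, not SoCo's actual implementation; the function names are made up, and the real method adds an FPN-aligned backbone, a momentum target network, and a predictor head.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Normalize rows to unit L2 norm (eps guards against zero vectors)."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def object_level_loss(online_embs, target_embs):
    """BYOL-style negative-cosine loss averaged over object proposals.

    online_embs, target_embs: (num_proposals, dim) arrays holding the
    embeddings of the SAME selective-search boxes, extracted from two
    differently augmented views of the image. Matching box k in view 1
    with box k in view 2 is what gives translation/scale invariance at
    the object level rather than the image level.
    """
    p = l2_normalize(online_embs)
    z = l2_normalize(target_embs)
    # Per-proposal loss is 2 - 2*cos(p, z): zero when the two views of a
    # box agree perfectly, 2 when their embeddings are orthogonal.
    per_box = 2.0 - 2.0 * np.sum(p * z, axis=-1)
    return per_box.mean()
```

If the two views produce identical box embeddings the loss is 0; orthogonal embeddings give 2. In the full method the target branch is a momentum-averaged copy of the online network, so only the online branch receives gradients.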

Fangyun Wei, Yue Gao, Zhirong Wu, Han Hu, Stephen Lin • 2021

Related benchmarks

Task                   Dataset             Result            Rank
Object Detection       COCO 2017 (val)     --                2643
Instance Segmentation  COCO 2017 (val)     --                1201
Semantic Segmentation  ADE20K              mIoU 37.8         1024
Semantic Segmentation  Cityscapes          mIoU 76.5         658
Object Detection       LVIS v1.0 (val)     AP (bbox) 17.6    529
Instance Segmentation  COCO                AP (mask) 37.4    291
Object Detection       COCO                AP50 (box) 61.9   237
Semantic Segmentation  Pascal VOC          mIoU 0.719        180
Semantic Segmentation  COCO Stuff (val)    mIoU 44.2         126
Semantic Segmentation  COCO Object (val)   mIoU 0.568        97

Showing 10 of 16 rows.
