UP-DETR: Unsupervised Pre-training for Object Detection with Transformers
About
DEtection TRansformer (DETR) for object detection reaches competitive performance compared with Faster R-CNN via a transformer encoder-decoder architecture. However, with transformers trained from scratch, DETR needs large-scale training data and an extremely long training schedule, even on the COCO dataset. Inspired by the great success of pre-training transformers in natural language processing, we propose a novel pretext task named random query patch detection in Unsupervised Pre-training DETR (UP-DETR). Specifically, we randomly crop patches from the given image and then feed them as queries to the decoder. The model is pre-trained to detect these query patches from the input image. During the pre-training, we address two critical issues: multi-task learning and multi-query localization. (1) To trade off classification and localization preferences in the pretext task, we find that freezing the CNN backbone is the prerequisite for the success of pre-training transformers. (2) To perform multi-query localization, we develop UP-DETR with multi-query patch detection with an attention mask. UP-DETR also provides a unified perspective for fine-tuning object detection and one-shot detection tasks. In our experiments, UP-DETR significantly boosts the performance of DETR with faster convergence and higher average precision on object detection, one-shot detection and panoptic segmentation. Code and pre-trained models: https://github.com/dddzg/up-detr.
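Below is a minimal PyTorch-style sketch of how the random query patch pretext task can generate its own supervision: patches are cropped at random from an image, and the crop boxes become the targets the decoder must recover. The helper name `random_query_patches`, the patch-size range, and the 128x128 resize are illustrative assumptions, not the repository's actual implementation.

```python
import random
import torch
import torchvision.transforms.functional as TF


def random_query_patches(image, num_patches=10, patch_size_range=(0.1, 0.5)):
    """Crop random patches from a CHW image tensor and record their boxes.

    Each returned patch later serves as a decoder query; the recorded box
    (normalized cx, cy, w, h, the format DETR predicts) is its target.
    """
    _, h, w = image.shape
    patches, boxes = [], []
    for _ in range(num_patches):
        ph = max(1, int(h * random.uniform(*patch_size_range)))
        pw = max(1, int(w * random.uniform(*patch_size_range)))
        y0 = random.randint(0, h - ph)
        x0 = random.randint(0, w - pw)
        # Resize every crop to a fixed size so patches can be batched.
        patches.append(TF.resized_crop(image, y0, x0, ph, pw, [128, 128]))
        boxes.append(torch.tensor(
            [(x0 + pw / 2) / w, (y0 + ph / 2) / h, pw / w, ph / h]))
    return torch.stack(patches), torch.stack(boxes)
```

In a pre-training loop of this kind, each patch would be passed through the frozen CNN backbone, globally pooled, and added to the object queries, while the decoder outputs are matched against the recorded boxes with DETR's set-based loss; the attention mask then restricts each group of queries to its own patch for multi-query localization.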
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Detection | COCO 2017 (val) | AP | 42.8 | 2454 |
| Object Detection | MS-COCO (val) | mAP | 0.428 | 138 |
| Object Detection | PASCAL VOC 2007 (test) | AP | 57.2 | 18 |
| Class-agnostic Object Detection | MS-COCO 2017 (val) | AP (Overall) | 0.001 | 15 |
| Object Detection | MS-COCO In-Domain (val) | D-ECE | 25.5 | 6 |
| Object Detection | CorCOCO Out-Domain (val) | D-ECE | 27.5 | 6 |