
Deformable DETR: Deformable Transformers for End-to-End Object Detection

About

DETR was recently proposed to eliminate the need for many hand-designed components in object detection while demonstrating good performance. However, it suffers from slow convergence and limited feature spatial resolution, due to the limitations of Transformer attention modules in processing image feature maps. To mitigate these issues, we propose Deformable DETR, whose attention modules attend only to a small set of key sampling points around a reference. Deformable DETR achieves better performance than DETR (especially on small objects) with 10× fewer training epochs. Extensive experiments on the COCO benchmark demonstrate the effectiveness of our approach. Code is released at https://github.com/fundamentalvision/Deformable-DETR.
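The core idea described above — each query attends to a small, learned set of sampling points around a reference location, rather than to every pixel of the feature map — can be sketched as follows. This is a simplified single-head, single-scale illustration in NumPy, not the paper's multi-scale, multi-head implementation; all function and variable names here are illustrative.

```python
import numpy as np

def bilinear_sample(feat, x, y):
    """Bilinearly interpolate a (H, W, C) feature map at continuous coords (x, y)."""
    H, W, _ = feat.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x0c, x1c = np.clip([x0, x0 + 1], 0, W - 1)
    y0c, y1c = np.clip([y0, y0 + 1], 0, H - 1)
    wx1, wy1 = x - x0, y - y0          # fractional parts
    wx0, wy0 = 1.0 - wx1, 1.0 - wy1
    return (feat[y0c, x0c] * wx0 * wy0 + feat[y0c, x1c] * wx1 * wy0 +
            feat[y1c, x0c] * wx0 * wy1 + feat[y1c, x1c] * wx1 * wy1)

def deformable_attention(value, ref_point, offsets, attn_logits):
    """Aggregate features at K sampled points around a reference point.

    value       : (H, W, C) feature map
    ref_point   : (x, y) reference location for this query
    offsets     : (K, 2) predicted (dx, dy) sampling offsets
    attn_logits : (K,) predicted attention logits, normalized by softmax

    In Deformable DETR both `offsets` and `attn_logits` are linear
    projections of the query embedding; here they are passed in directly.
    """
    samples = np.stack([bilinear_sample(value, ref_point[0] + dx, ref_point[1] + dy)
                        for dx, dy in offsets])          # (K, C)
    w = np.exp(attn_logits - attn_logits.max())
    w /= w.sum()                                          # softmax over the K points
    return (w[:, None] * samples).sum(axis=0)             # (C,) aggregated feature
```

Because only K sampling points are visited per query (K is small, e.g. 4 per head per level in the paper), the cost no longer scales with the full spatial size of the feature map, which is what enables higher-resolution features and faster convergence.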

Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Detection | COCO 2017 (val) | AP | 51.4 | 2454 |
| Object Detection | COCO (test-dev) | mAP | 56.6 | 1195 |
| Instance Segmentation | COCO 2017 (val) | -- | -- | 1144 |
| Object Detection | MS COCO (test-dev) | mAP@.5 | 71.9 | 677 |
| Object Detection | COCO (val) | mAP | 49.8 | 613 |
| Object Detection | LVIS v1.0 (val) | AP bbox | 32.5 | 518 |
| Object Detection | COCO v2017 (test-dev) | mAP | 52.3 | 499 |
| Oriented Object Detection | DOTA v1.0 (test) | SV | 72.53 | 378 |
| Video Object Detection | ImageNet VID (val) | mAP (%) | 55.4 | 341 |
| Object Detection | MS-COCO 2017 (val) | -- | -- | 237 |
Showing 10 of 134 rows.
