
DPT: Deformable Patch-based Transformer for Visual Recognition

About

Transformers have achieved great success in computer vision, but how to split an image into patches remains an open problem. Existing methods usually use fixed-size patch embeddings, which may destroy the semantics of objects. To address this problem, we propose a new Deformable Patch (DePatch) module that learns to adaptively split images into patches with different positions and scales in a data-driven way, rather than using predefined fixed patches. In this way, our method can better preserve the semantics within each patch. DePatch works as a plug-and-play module that can be easily incorporated into different transformers to enable end-to-end training. We term this DePatch-embedded transformer the Deformable Patch-based Transformer (DPT) and conduct extensive evaluations of DPT on image classification and object detection. Results show that DPT achieves 81.9% top-1 accuracy on ImageNet classification, and 43.7% box mAP with RetinaNet and 44.3% with Mask R-CNN on MS-COCO object detection. Code is available at: https://github.com/CASIA-IVA-Lab/DPT
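To illustrate the core idea, here is a minimal PyTorch sketch of a deformable patch embedding: a lightweight predictor estimates a per-patch offset and scale, the image is resampled on the resulting deformed grid with bilinear interpolation, and the resampled patches are projected to tokens. The layer names, the tanh-bounded offset/scale parameterization, and all hyperparameters are illustrative assumptions, not the official implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformablePatchEmbed(nn.Module):
    """Sketch of a DePatch-style embedding: each patch predicts an offset
    for its center and a scale for its extent instead of using a fixed grid.
    Parameterization and layer names are illustrative, not the paper's."""

    def __init__(self, in_chans=3, embed_dim=64, patch_size=4):
        super().__init__()
        self.patch_size = patch_size
        # Predicts 4 values per patch: offset (dx, dy) and scale (sx, sy).
        self.pred = nn.Conv2d(in_chans, 4, kernel_size=patch_size, stride=patch_size)
        # Projects each resampled patch to an embedding vector.
        self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):
        B, C, H, W = x.shape
        k = self.patch_size
        ph, pw = H // k, W // k

        params = self.pred(x)                                   # (B, 4, ph, pw)
        offset = torch.tanh(params[:, :2]) / max(ph, pw)        # small center shift
        scale = 1.0 + 0.5 * torch.tanh(params[:, 2:])           # patch scale around 1

        # Fixed patch centers in normalized [-1, 1] coordinates, then shifted.
        ys = torch.linspace(-1 + 1 / ph, 1 - 1 / ph, ph, device=x.device)
        xs = torch.linspace(-1 + 1 / pw, 1 - 1 / pw, pw, device=x.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        centers = torch.stack((gx, gy), 0)[None] + offset       # (B, 2, ph, pw)

        # k x k intra-patch sampling pattern, stretched by the predicted scale.
        dy = torch.linspace(-1 / ph, 1 / ph, k, device=x.device)
        dx = torch.linspace(-1 / pw, 1 / pw, k, device=x.device)
        oy, ox = torch.meshgrid(dy, dx, indexing="ij")
        intra = torch.stack((ox, oy), 0)                        # (2, k, k)

        # Per-patch sampling points: (B, 2, ph, pw, k, k) -> (B, ph*k, pw*k, 2).
        pts = centers[..., None, None] + scale[..., None, None] * intra[None, :, None, None]
        pts = pts.permute(0, 2, 4, 3, 5, 1).reshape(B, ph * k, pw * k, 2)

        # Bilinearly resample the deformed patches, then embed them as tokens.
        sampled = F.grid_sample(x, pts, align_corners=False)    # (B, C, ph*k, pw*k)
        tokens = self.proj(sampled)                             # (B, embed_dim, ph, pw)
        return tokens.flatten(2).transpose(1, 2)                # (B, ph*pw, embed_dim)
```

Because the module keeps the same input/output interface as a standard patch embedding (image in, token sequence out), it can replace the fixed patch-splitting stage of a transformer without other architectural changes, which is what makes it plug-and-play.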

Zhiyang Chen, Yousong Zhu, Chaoyang Zhao, Guosheng Hu, Wei Zeng, Jinqiao Wang, Ming Tang • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Classification | ImageNet-1K 1.0 (val) | Top-1 Accuracy (%) | 81.9 | 1155 |
| Instance Segmentation | COCO 2017 (val) | APm | 0.41 | 1144 |
| Image Classification | ImageNet 1k (test) | Top-1 Accuracy | 81.9 | 798 |
| Object Detection | MS-COCO 2017 (val) | mAP | 43.7 | 237 |

Other info

Code: https://github.com/CASIA-IVA-Lab/DPT
