
HRFormer: High-Resolution Transformer for Dense Prediction

About

We present a High-Resolution Transformer (HRFormer) that learns high-resolution representations for dense prediction tasks, in contrast to the original Vision Transformer that produces low-resolution representations and has high memory and computational cost. We take advantage of the multi-resolution parallel design introduced in high-resolution convolutional networks (HRNet), along with local-window self-attention that performs self-attention over small non-overlapping image windows, for improving the memory and computation efficiency. In addition, we introduce a convolution into the FFN to exchange information across the disconnected image windows. We demonstrate the effectiveness of the High-Resolution Transformer on both human pose estimation and semantic segmentation tasks, e.g., HRFormer outperforms Swin transformer by $1.3$ AP on COCO pose estimation with $50\%$ fewer parameters and $30\%$ fewer FLOPs. Code is available at: https://github.com/HRNet/HRFormer.
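The local-window self-attention described above splits the feature map into small non-overlapping windows and attends within each window independently, which is what keeps memory and compute manageable at high resolution; the convolution added to the FFN then exchanges information across those otherwise disconnected windows. As a rough illustration of the windowing step only (a minimal NumPy sketch with hypothetical helper names, not the paper's implementation), the partition and its inverse can be written as:

```python
import numpy as np

def window_partition(x, win):
    """Split a feature map (H, W, C) into non-overlapping win x win windows.

    Returns shape (num_windows, win*win, C). Self-attention is computed
    within each window independently, so the cost grows with the window
    size rather than quadratically with the full H*W resolution.
    """
    H, W, C = x.shape
    assert H % win == 0 and W % win == 0, "pad so H and W divide by win"
    x = x.reshape(H // win, win, W // win, win, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, win * win, C)

def window_reverse(windows, win, H, W):
    """Inverse of window_partition: merge windows back into (H, W, C)."""
    C = windows.shape[-1]
    x = windows.reshape(H // win, W // win, win, win, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(H, W, C)

# Round-trip check on a toy 8x8 feature map with 4x4 windows.
feat = np.arange(8 * 8 * 3, dtype=np.float32).reshape(8, 8, 3)
wins = window_partition(feat, 4)           # four disjoint 4x4 windows
restored = window_reverse(wins, 4, 8, 8)
assert np.array_equal(feat, restored)
```

Because the windows are disjoint, no attention path connects them; that is why a convolution inside the FFN (applied on the merged (H, W, C) map) is needed to mix features across window borders.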

Yuhui Yuan, Rao Fu, Lang Huang, Weihong Lin, Chao Zhang, Xilin Chen, Jingdong Wang • 2021

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Semantic segmentation | ADE20K (val) | mIoU 50 | 2731 |
| Semantic segmentation | ADE20K | -- | 936 |
| Image Classification | ImageNet-1k (val) | Top-1 Acc 82.8 | 706 |
| Semantic segmentation | Cityscapes | -- | 578 |
| Human Pose Estimation | COCO (test-dev) | AP 76.2 | 408 |
| 2D Human Pose Estimation | COCO 2017 (val) | AP 77.2 | 386 |
| Pose Estimation | COCO (val) | AP 77.2 | 319 |
| Semantic segmentation | Cityscapes (val) | mIoU 83.2 | 287 |
| Semantic segmentation | COCO Stuff | -- | 195 |
| Human Pose Estimation | COCO 2017 (test-dev) | AP 76.2 | 180 |

Showing 10 of 35 rows

Other info

Code: https://github.com/HRNet/HRFormer