
MixFormer: Mixing Features across Windows and Dimensions

About

While local-window self-attention performs well in vision tasks, it suffers from a limited receptive field and weak modeling capability, mainly because it performs self-attention within non-overlapping windows and shares weights along the channel dimension. We propose MixFormer to address these issues. First, we combine local-window self-attention with depth-wise convolution in a parallel design, modeling cross-window connections to enlarge the receptive field. Second, we propose bi-directional interactions across the two branches, providing complementary clues in the channel and spatial dimensions. Together, these two designs achieve efficient feature mixing across windows and dimensions. MixFormer achieves image-classification results competitive with EfficientNet and better than RegNet and Swin Transformer. On 5 dense prediction tasks over MS COCO, ADE20k, and LVIS, it outperforms its alternatives by significant margins at lower computational cost. Code is available at \url{https://github.com/PaddlePaddle/PaddleClas}.
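The two designs in the abstract (parallel window-attention and depth-wise-convolution branches, plus bi-directional channel/spatial interactions) can be illustrated with a toy NumPy sketch. This is not the paper's implementation: the function names (`mixing_block`, `window_attention`) and the simple sigmoid gating used for the two interactions are assumptions for illustration; the actual projections and normalization live in the linked repository.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def window_attention(x, window=2):
    """Self-attention computed independently inside non-overlapping windows.

    x: (H, W, C) feature map; H and W assumed divisible by `window`.
    """
    H, W, C = x.shape
    out = np.zeros_like(x)
    for i in range(0, H, window):
        for j in range(0, W, window):
            win = x[i:i + window, j:j + window].reshape(-1, C)   # (w*w, C)
            attn = softmax(win @ win.T / np.sqrt(C))             # token-token weights
            out[i:i + window, j:j + window] = (attn @ win).reshape(window, window, C)
    return out

def depthwise_conv3x3(x, kernels):
    """Per-channel 3x3 convolution with zero padding; kernels: (C, 3, 3)."""
    H, W, C = x.shape
    p = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                out[i, j, c] = (p[i:i + 3, j:j + 3, c] * kernels[c]).sum()
    return out

def mixing_block(x, kernels):
    """Parallel branches + bi-directional interactions (toy gating, not the paper's exact ops)."""
    a = window_attention(x)              # attention branch: within-window mixing
    d = depthwise_conv3x3(x, kernels)    # conv branch: cross-window local mixing
    # channel interaction: conv branch gates the attention branch per channel
    ch_gate = sigmoid(d.mean(axis=(0, 1)))               # (C,)
    # spatial interaction: attention branch gates the conv branch per location
    sp_gate = sigmoid(a.mean(axis=-1, keepdims=True))    # (H, W, 1)
    return a * ch_gate + d * sp_gate

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4, 8))
kernels = rng.standard_normal((8, 3, 3)) * 0.1
y = mixing_block(x, kernels)
print(y.shape)  # (4, 4, 8)
```

The key point the sketch captures is that each branch both produces features and modulates the other: the convolution contributes a channel-wise signal the shared-weight attention lacks, while attention contributes a spatial signal back to the convolution.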

Qiang Chen, Qiman Wu, Jian Wang, Qinghao Hu, Tao Hu, Errui Ding, Jian Cheng, Jingdong Wang · 2022

Related benchmarks

Task | Dataset | Result | Rank
Semantic Segmentation | ADE20K (val) | – | 2731
Image Classification | ImageNet-1K 1.0 (val) | Top-1 Accuracy: 83.8% | 1866
Classification | ImageNet-1K 1.0 (val) | Top-1 Accuracy: 83.8% | 1155
Object Detection | COCO (val) | mAP: 47.6 | 613
Instance Segmentation | COCO (val) | APmk: 44.9 | 472
Object Detection | COCO 2017 | – | 279
Instance Segmentation | COCO 2017 | APm: 41.2 | 199
Instance Segmentation | LVIS v1.0 (val) | – | 189
Keypoint Detection | COCO (val) | AP: 75.3 | 60
