Positional Label for Self-Supervised Vision Transformer

About

Positional encoding is important for vision transformers (ViTs) to capture the spatial structure of the input image, and its general effectiveness has been demonstrated. In this work we propose training a ViT to recognize the positional labels of the patches of the input image; this apparently simple task actually yields a meaningful self-supervisory signal. Building on previous work on ViT positional encoding, we propose two positional labels dedicated to 2D images: absolute position and relative position. Our positional labels can be easily plugged into various current ViT variants, and they can be used in two ways: (a) as an auxiliary training target for vanilla ViTs (e.g., ViT-B and Swin-B) for better performance, or (b) combined with self-supervised ViTs (e.g., MAE) to provide a more powerful self-supervised signal for semantic feature learning. Experiments demonstrate that with the proposed self-supervised methods, ViT-B and Swin-B gain improvements of 1.20% and 0.74% (top-1 accuracy) on ImageNet, respectively, and improvements of 6.15% and 1.14% on Mini-ImageNet.
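To make the two label types concrete, here is a minimal NumPy sketch of how they could be constructed for a patch grid; the exact class-index mapping is our assumption for illustration, not code from the paper. The absolute label is each patch's flattened grid index, and the relative label maps the 2D offset between every ordered pair of patches to a single class id.

```python
import numpy as np

def absolute_position_labels(grid_h, grid_w):
    """Absolute positional label: each patch's flattened index in the grid."""
    return np.arange(grid_h * grid_w)

def relative_position_labels(grid_h, grid_w):
    """Relative positional label: for every ordered pair of patches, the 2D
    offset (dy, dx) mapped to one class id. Offsets range over
    [-(H-1), H-1] x [-(W-1), W-1], giving (2H-1)*(2W-1) classes."""
    ys, xs = np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij")
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1)   # (N, 2) patch coords
    diff = coords[:, None, :] - coords[None, :, :]        # (N, N, 2) offsets
    # Shift offsets to be non-negative, then flatten to a single class index.
    dy = diff[..., 0] + (grid_h - 1)
    dx = diff[..., 1] + (grid_w - 1)
    return dy * (2 * grid_w - 1) + dx                     # (N, N) class ids
```

In training, a lightweight classification head on the patch tokens would then predict these labels with a cross-entropy loss, serving as the auxiliary or self-supervised objective.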

Zhemin Zhang, Xun Gong • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Medical Image Segmentation | MM-WHS (test) | Dice Score | 85.52 | 62 |
| Multi-organ Segmentation | BTCV (test) | Spl | 94.35 | 55 |
| Liver Segmentation | LiTS | Dice Score | 94.13 | 29 |
| Medical Image Segmentation | MSD Spleen (test) | Dice Score | 94.16 | 18 |
| Brain Tumor Segmentation | BraTS 21 | Dice TC | 81.35 | 14 |
| Classification | CC-CCII | Accuracy | 87.54 | 12 |
