
Conditional Positional Encodings for Vision Transformers

About

We propose a conditional positional encoding (CPE) scheme for vision Transformers. Unlike previous fixed or learnable positional encodings, which are pre-defined and independent of the input tokens, CPE is dynamically generated and conditioned on the local neighborhood of the input tokens. As a result, CPE readily generalizes to input sequences longer than any the model saw during training. Moreover, CPE preserves the desired translation invariance in image classification, which improves performance. We implement CPE with a simple Position Encoding Generator (PEG) that is seamlessly incorporated into the current Transformer framework. Built on PEG, we present the Conditional Position encoding Vision Transformer (CPVT). We demonstrate that CPVT has attention maps visually similar to those of models with learned positional encodings, while delivering better results. Our code is available at https://github.com/Meituan-AutoML/CPVT .
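The abstract does not spell out how the PEG conditions on a token's local neighborhood, but a common way to realize this idea is a depthwise convolution applied to the tokens reshaped into their 2-D feature map: each token's positional offset is then computed from its spatial neighbors, so the module works for any input resolution. The sketch below illustrates this under that assumption; the class name `PEG` and all shapes are illustrative, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class PEG(nn.Module):
    """Sketch of a Position Encoding Generator.

    A depthwise 3x3 convolution over the 2-D token map produces a
    position-dependent signal conditioned on each token's local
    neighborhood, added back to the tokens as a residual.
    """
    def __init__(self, dim: int, k: int = 3):
        super().__init__()
        # groups=dim makes the convolution depthwise (one filter per channel)
        self.proj = nn.Conv2d(dim, dim, k, padding=k // 2, groups=dim)

    def forward(self, x: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # x: (B, N, C) patch tokens with N == h * w (class token excluded)
        b, n, c = x.shape
        feat = x.transpose(1, 2).reshape(b, c, h, w)   # back to 2-D map
        pos = self.proj(feat).flatten(2).transpose(1, 2)
        return x + pos                                  # conditional encoding
```

Because the convolution is resolution-agnostic, the same module handles a 4x4 token grid at training time and an 8x8 grid at test time, which is the length-generalization property the abstract highlights.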

Xiangxiang Chu, Zhi Tian, Bo Zhang, Xinlong Wang, Chunhua Shen • 2021

Related benchmarks

| Task                 | Dataset                | Metric             | Result | Rank |
|----------------------|------------------------|--------------------|--------|------|
| Image Classification | ImageNet-1K 1.0 (val)  | Top-1 Accuracy     | 74.9   | 1866 |
| Image Classification | ImageNet-1K 1.0 (val)  | Top-1 Accuracy (%) | 82.3   | 1155 |
| Image Classification | ImageNet-1K            | Top-1 Acc          | 83.6   | 836  |
| Image Classification | ImageNet 1k (test)     | Top-1 Accuracy     | 82.3   | 798  |
| Image Classification | ImageNet-1k (val)      | Top-1 Acc          | 81.5   | 706  |
| Image Classification | CIFAR-100              | Accuracy           | 75.1   | 302  |
| Image Classification | ImageNet (val)         | -                  | -      | 300  |
| Image Classification | ImageNet-1K 1 (val)    | Top-1 Accuracy     | 81.5   | 119  |
| Image Classification | ImageNet-1k (val)      | Top-1 Accuracy     | 81.5   | 91   |
| Image Classification | ImageNet (val)         | Top-1 Acc          | 81.5   | 37   |

Showing 10 of 16 rows.
