
Discrete Representations Strengthen Vision Transformer Robustness

About

Vision Transformer (ViT) is emerging as the state-of-the-art architecture for image recognition. While recent studies suggest that ViTs are more robust than their convolutional counterparts, our experiments find that ViTs trained on ImageNet are overly reliant on local textures and fail to make adequate use of shape information. ViTs thus have difficulty generalizing to out-of-distribution, real-world data. To address this deficiency, we present a simple and effective modification to ViT's input layer: adding discrete tokens produced by a vector-quantized encoder. Unlike the standard continuous pixel tokens, discrete tokens are invariant under small perturbations and contain less information individually, which encourages ViTs to learn global, invariant information. Experimental results demonstrate that adding discrete representations to four architecture variants strengthens ViT robustness by up to 12% across seven ImageNet robustness benchmarks while maintaining the performance on ImageNet.
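The key operation behind the discrete tokens is vector quantization: each continuous patch embedding is snapped to its nearest entry in a codebook, so small pixel-level perturbations map to the same token id. A minimal NumPy sketch of that quantization step (the codebook here is random for illustration; in the paper it comes from a learned vector-quantized encoder):

```python
import numpy as np

def vector_quantize(patch_embeddings, codebook):
    """Map each continuous patch embedding to its nearest codebook entry.

    patch_embeddings: (num_patches, dim) continuous tokens
    codebook:         (num_codes, dim) discrete vocabulary
    Returns (indices, quantized), where quantized[i] == codebook[indices[i]].
    """
    # Squared Euclidean distance from every patch to every code entry.
    dists = ((patch_embeddings[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    indices = dists.argmin(axis=1)      # discrete token ids
    return indices, codebook[indices]   # snapped (quantized) embeddings

# Toy example: 4 patches, a 3-entry codebook, 2-dim embeddings.
rng = np.random.default_rng(0)
patches = rng.normal(size=(4, 2))
codebook = rng.normal(size=(3, 2))
ids, quantized = vector_quantize(patches, codebook)
```

Because many nearby continuous embeddings collapse to the same code, each discrete token carries less local texture detail, which is what pushes the model toward global cues.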

Chengzhi Mao, Lu Jiang, Mostafa Dehghani, Carl Vondrick, Rahul Sukthankar, Irfan Essa • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Image Classification | ImageNet-A | Top-1 Accuracy | 52.64 | 553 |
| Image Classification | ImageNet V2 | Top-1 Accuracy | 75.55 | 487 |
| Image Classification | ImageNet-R | Top-1 Accuracy | 44.77 | 474 |
| Image Classification | ImageNet-Sketch | Top-1 Accuracy | 44.72 | 360 |
| Image Classification | ImageNet | Top-1 Accuracy | 81.83 | 324 |
| Image Classification | ImageNet | Accuracy | 85.07 | 184 |
| Image Classification | ObjectNet | Top-1 Accuracy | 46.62 | 177 |
| Image Classification | ImageNet-C (test) | mCE (Mean Corruption Error) | 38.74 | 110 |
| Image Classification | ImageNet-R (test) | -- | -- | 105 |
| Image Classification | ImageNet-C | mCE | 38.74 | 103 |

Showing 10 of 17 rows.
