
How Do Vision Transformers Work?

About

The success of multi-head self-attentions (MSAs) for computer vision is now indisputable. However, little is known about how MSAs work. We present fundamental explanations to help better understand the nature of MSAs. In particular, we demonstrate the following properties of MSAs and Vision Transformers (ViTs): (1) MSAs improve not only accuracy but also generalization by flattening the loss landscapes. Such improvement is primarily attributable to their data specificity, not long-range dependency. On the other hand, ViTs suffer from non-convex losses. Large datasets and loss landscape smoothing methods alleviate this problem; (2) MSAs and Convs exhibit opposite behaviors. For example, MSAs are low-pass filters, but Convs are high-pass filters. Therefore, MSAs and Convs are complementary; (3) Multi-stage neural networks behave like a series connection of small individual models. In addition, MSAs at the end of a stage play a key role in prediction. Based on these insights, we propose AlterNet, a model in which Conv blocks at the end of a stage are replaced with MSA blocks. AlterNet outperforms CNNs not only in large data regimes but also in small data regimes. The code is available at https://github.com/xxxnell/how-do-vits-work.
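The abstract's claim that MSAs act as low-pass filters while Convs act as high-pass filters can be illustrated with a toy frequency analysis. The sketch below is a hypothetical numpy illustration, not the paper's code: it stands in for self-attention with a locality-based row-stochastic weight matrix, and for a Conv with a simple difference kernel, then compares how much high-frequency energy each output retains.

```python
import numpy as np

# Toy 1D "feature map": a low-frequency trend plus a high-frequency component.
n = 128
t = np.arange(n)
x = np.sin(2 * np.pi * t / n) + 0.5 * np.sin(2 * np.pi * 30 * t / n)

def high_freq_fraction(sig, cutoff=10):
    """Fraction of spectral energy above a (hypothetical) cutoff bin."""
    spec = np.abs(np.fft.rfft(sig))
    return spec[cutoff:].sum() / spec.sum()

# Stand-in for self-attention: a row-stochastic averaging matrix built from
# softmax over locality-based similarity scores. Each output token is a
# convex combination of inputs, which smooths the signal (low-pass).
scores = -np.abs(t[:, None] - t[None, :]) / 8.0  # assumed locality scores
attn = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
x_attn = attn @ x

# Stand-in for a Conv: a difference kernel, which amplifies rapid changes
# relative to slow trends (high-pass).
x_conv = np.convolve(x, [1.0, -1.0], mode="same")

print(high_freq_fraction(x), high_freq_fraction(x_attn), high_freq_fraction(x_conv))
```

Relative to the input, the attention-style averaging shrinks the high-frequency fraction and the difference kernel grows it, matching the low-pass/high-pass dichotomy the abstract describes; the complementarity claim follows because the two operations suppress opposite ends of the spectrum.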

Namuk Park, Songkuk Kim • 2022
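The AlterNet recipe from the abstract — replace the Conv block at the end of each stage with an MSA block — can be sketched as a layout rule. This is a hypothetical pure-Python illustration of the design pattern only (the function name and stage configuration are invented for the example; see the linked repository for the actual model):

```python
def alternet_layout(blocks_per_stage):
    """Hypothetical sketch: given Conv block counts per stage (e.g. a
    ResNet-50-style [3, 4, 6, 3]), swap the last block of each stage
    for an MSA block, per the abstract's design rule."""
    stages = []
    for n in blocks_per_stage:
        stage = ["Conv"] * n
        if n > 0:
            stage[-1] = "MSA"  # MSA at the end of the stage drives prediction
        stages.append(stage)
    return stages

print(alternet_layout([3, 4, 6, 3]))
# First stage becomes ["Conv", "Conv", "MSA"], and so on.
```

The placement follows the abstract's observation that multi-stage networks behave like a series connection of small models, with the MSA at the end of each stage playing the key role in prediction.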

Related benchmarks

Task                       Dataset                      Metric             Result   Rank
Graph Regression           ZINC (test)                  MAE                0.059    204
Graph Regression           Peptides-struct LRGB (test)  MAE                0.246    178
Graph Classification       CIFAR10 (test)               Test Accuracy      76.468   139
Graph Classification       Peptides-func LRGB (test)    AP                 0.6988   136
Graph Classification       MNIST (test)                 Accuracy           98.108   110
Graph Pattern Recognition  PATTERN (test)               Weighted Accuracy  87.196   12
Graph Clustering           CLUSTER (test)               Weighted Accuracy  80.026   10
