Pay Attention to MLPs

About

Transformers have become one of the most important architectural innovations in deep learning and have enabled many breakthroughs over the past few years. Here we propose a simple network architecture, gMLP, based on MLPs with gating, and show that it can perform as well as Transformers in key language and vision applications. Our comparisons show that self-attention is not critical for Vision Transformers, as gMLP can achieve the same accuracy. For BERT, our model achieves parity with Transformers on pretraining perplexity and is better on some downstream NLP tasks. On finetuning tasks where gMLP performs worse, making the gMLP model substantially larger can close the gap with Transformers. In general, our experiments show that gMLP can scale as well as Transformers over increased data and compute.

Hanxiao Liu, Zihang Dai, David R. So, Quoc V. Le • 2021
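For concreteness, below is a minimal NumPy sketch of a single gMLP block as described in the paper: a channel MLP wrapped around a Spatial Gating Unit (SGU) that splits the hidden channels, applies a learned linear projection across the sequence (token) dimension to one half, and uses it to gate the other half element-wise. The shapes, initializations, and variable names here are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize over the channel (last) dimension.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def gelu(x):
    # Tanh approximation of GELU.
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def spatial_gating_unit(z, W_s, b_s):
    # Split channels; gate one half with a spatial (token-mixing) projection of the other.
    u, v = np.split(z, 2, axis=-1)   # each (n, d_ffn / 2)
    v = layer_norm(v)
    v = W_s @ v + b_s                # linear projection across the n tokens
    return u * v                     # element-wise gating

def gmlp_block(x, W1, b1, W2, b2, W_s, b_s):
    # One gMLP block: channel MLP around an SGU, plus a residual connection.
    shortcut = x
    z = gelu(layer_norm(x) @ W1 + b1)        # channel expansion
    z = spatial_gating_unit(z, W_s, b_s)
    z = z @ W2 + b2                          # channel contraction
    return z + shortcut

# Toy shapes (hypothetical): n tokens, d_model channels, d_ffn hidden width.
n, d_model, d_ffn = 16, 32, 64
rng = np.random.default_rng(0)
x = rng.normal(size=(n, d_model))
W1 = rng.normal(scale=0.02, size=(d_model, d_ffn));      b1 = np.zeros(d_ffn)
W2 = rng.normal(scale=0.02, size=(d_ffn // 2, d_model)); b2 = np.zeros(d_model)
W_s = np.zeros((n, n));                                  b_s = np.ones((n, 1))  # near-identity init
print(gmlp_block(x, W1, b1, W2, b2, W_s, b_s).shape)     # (16, 32)
```

The spatial projection is initialized close to an identity mapping (weights near zero, bias near one), so each block starts out behaving like a plain channel MLP and learns token mixing during training.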

Related benchmarks

Task                 | Dataset                | Metric             | Result | Rank
Image Classification | ImageNet-1K 1.0 (val)  | Top-1 Accuracy     | 81.6   | 1866
Classification       | ImageNet-1K 1.0 (val)  | Top-1 Accuracy (%) | 81.6   | 1155
Image Classification | ImageNet-1k (val)      | Top-1 Accuracy     | 81.6   | 840
Language Modeling    | WikiText-103 (test)    | Perplexity         | 29.13  | 524
Question Answering   | SQuAD v1.1 (dev)       | F1 Score           | 92.2   | 375
Image Classification | ImageNet-1k (val)      | Top-1 Accuracy     | 81.6   | 287
Image Classification | PACS (test)            | Average Accuracy   | 84.23  | 254
Language Modeling    | WikiText-103 (val)     | Perplexity         | 28.08  | 180
Question Answering   | SQuAD v2.0 (dev)       | F1 Score           | 85.4   | 158
Image Classification | ImageNet-1k (val)      | Top-1 Accuracy     | 81.6   | 91
Showing 10 of 21 rows
