
Do Transformer Modifications Transfer Across Implementations and Applications?

About

The research community has proposed copious modifications to the Transformer architecture since it was introduced over three years ago, relatively few of which have seen widespread adoption. In this paper, we comprehensively evaluate many of these modifications in a shared experimental setting that covers most of the common uses of the Transformer in natural language processing. Surprisingly, we find that most modifications do not meaningfully improve performance. Furthermore, most of the Transformer variants we found beneficial were either developed in the same codebase that we used or are relatively minor changes. We conjecture that performance improvements may strongly depend on implementation details and correspondingly make some recommendations for improving the generality of experimental results.

Sharan Narang, Hyung Won Chung, Yi Tay, William Fedus, Thibault Fevry, Michael Matena, Karishma Malkan, Noah Fiedel, Noam Shazeer, Zhenzhong Lan, Yanqi Zhou, Wei Li, Nan Ding, Jake Marcus, Adam Roberts, Colin Raffel • 2021

Related benchmarks

Task                           | Dataset       | Metric         | Value | Rank
Language Modeling              | C4 (test)     | Perplexity     | 12.69 | 268
Natural Language Understanding | SuperGLUE     | SGLUE Score    | 75.65 | 84
Language Modeling              | C4 T5 (val)   | PPLX           | 19.28 | 20
Abstractive Summarization      | XSum          | XSum Score     | 17.9  | 14
Language Modeling              | PG19 T5 (val) | PPLX           | 15.64 | 10
Question Answering             | WebQuestions  | WebQ Accuracy  | 25.92 | 4
Masked Language Modeling       | C4            | Log Perplexity | 1.792 | 4
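
Note that the table mixes two related units: raw perplexity (the Perplexity and PPLX rows) and log perplexity (the C4 masked language modeling row). Assuming the conventional natural-log base, perplexity is simply the exponential of the per-token cross-entropy, so the two are interconvertible. A minimal sketch of the conversion:

```python
import math

def perplexity_from_log_perplexity(log_ppl: float) -> float:
    """Convert log perplexity to perplexity, assuming a natural-log base
    (an assumption here; the table does not state the base explicitly)."""
    return math.exp(log_ppl)

# Example: the C4 masked language modeling row reports a log perplexity
# of 1.792, which corresponds to a perplexity of roughly 6.0.
print(f"{perplexity_from_log_perplexity(1.792):.2f}")  # ~6.00
```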
