
RoBERTa: A Robustly Optimized BERT Pretraining Approach

About

Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov • 2019

Related benchmarks

Task | Dataset | Metric | Result | Rank
Commonsense Reasoning | HellaSwag | Accuracy | 83.4 | 1891
Node Classification | Cora | Accuracy | 76.93 | 1215
Commonsense Reasoning | WinoGrande | Accuracy | 79.3 | 1085
Node Classification | Citeseer | Accuracy | 66.68 | 931
Node Classification | Pubmed | Accuracy | 42.32 | 819
Commonsense Reasoning | PIQA | Accuracy | 79.4 | 751
Natural Language Inference | SNLI (test) | Accuracy | 91.83 | 690
Language Modeling | WikiText-103 (test) | Perplexity | 21.6 | 579
Physical Commonsense Reasoning | PIQA | Accuracy | 67.6 | 572
Natural Language Understanding | GLUE | SST-2 | 96.4 | 531
(Showing 10 of 1,021 benchmark rows.)

Other info

Code
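The authors release their models and code (the official implementation is built on fairseq). As a lightweight illustration only, not the authors' released code, the minimal sketch below loads a pretrained RoBERTa checkpoint through the Hugging Face transformers library (assumed installed) and queries it for a masked-token prediction; RoBERTa is pretrained with masked language modeling alone, without BERT's next-sentence prediction objective.

    # Minimal sketch: masked-token prediction with a pretrained RoBERTa checkpoint.
    # Assumes the Hugging Face `transformers` library and the `roberta-base` checkpoint;
    # this is not the authors' original fairseq implementation.
    import torch
    from transformers import RobertaTokenizer, RobertaForMaskedLM

    tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
    model = RobertaForMaskedLM.from_pretrained("roberta-base")
    model.eval()

    # RoBERTa uses "<mask>" as its mask token.
    text = "RoBERTa is a robustly optimized <mask> pretraining approach."
    inputs = tokenizer(text, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

    # Find the masked position and take the most likely token there.
    mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
    predicted_id = logits[0, mask_index].argmax(dim=-1)
    print(tokenizer.decode(predicted_id))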
