
Cloze-driven Pretraining of Self-attention Networks

About

We present a new approach for pretraining a bi-directional transformer model that provides significant performance gains across a variety of language understanding problems. Our model solves a cloze-style word reconstruction task, where each word is ablated and must be predicted given the rest of the text. Experiments demonstrate large performance gains on GLUE and new state-of-the-art results on NER as well as constituency parsing benchmarks, consistent with the concurrently introduced BERT model. We also present a detailed analysis of a number of factors that contribute to effective pretraining, including data domain and size, model capacity, and variations on the cloze objective.

Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, Michael Auli • 2019
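To make the cloze objective described above concrete, here is a minimal sketch of a cloze-style reconstruction loss: a word is replaced by a mask token and must be predicted from the surrounding, bi-directional context. This is a simplified BERT-style toy, not the paper's actual two-tower architecture; the vocabulary size, mask id, model dimensions, and the one-mask-per-sequence choice are all illustrative assumptions.

import torch
import torch.nn as nn

VOCAB_SIZE, MASK_ID, D_MODEL = 1000, 0, 64  # toy hyperparameters (assumed)

class ClozeModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, D_MODEL)
        layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True)
        # No causal attention mask: every position attends to the full context,
        # which is what makes the model bi-directional.
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(D_MODEL, VOCAB_SIZE)

    def forward(self, tokens):
        return self.out(self.encoder(self.embed(tokens)))  # (batch, seq, vocab)

def cloze_loss(model, tokens):
    # Ablate one random position per sequence and predict the original word
    # from the rest of the text.
    batch, seq = tokens.shape
    rows = torch.arange(batch)
    pos = torch.randint(seq, (batch,))
    masked = tokens.clone()
    masked[rows, pos] = MASK_ID
    logits = model(masked)[rows, pos]  # predictions at the masked slots
    return nn.functional.cross_entropy(logits, tokens[rows, pos])

model = ClozeModel()
tokens = torch.randint(1, VOCAB_SIZE, (8, 16))  # toy batch of token ids
loss = cloze_loss(model, tokens)
loss.backward()

In pretraining at scale, a fraction of positions would be masked per pass rather than one, and the encoder would be far deeper; the sketch only shows the shape of the objective.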

Related benchmarks

Task | Dataset | Metric | Result | Rank
Named Entity Recognition | CoNLL 2003 (test) | F1 Score | 93.5 | 539
Natural Language Understanding | GLUE (test) | SST-2 Accuracy | 94.6 | 416
Named Entity Recognition | CoNLL English 2003 (test) | F1 Score | 93.5 | 135
Named Entity Recognition | CoNLL 03 | F1 (Entity) | 93.5 | 102
Constituency Parsing | WSJ Penn Treebank (test) | F1 Score | 95.6 | 27
Named Entity Recognition | CoNLL English 2003 (dev) | F1 Score | 96.9 | 26
Named Entity Recognition | CoNLL English 2003 | F1 Score | 93.5 | 19
Constituency Parsing | Penn Treebank (dev) | F1 Score | 95.5 | 3
