SOTA Masked Language Modeling benchmarks and papers with code | Wizwand
Masked Language Modeling
Benchmarks
| Dataset Name | SOTA Method | Metric | Best Result | Results | Last Updated |
| --- | --- | --- | --- | --- | --- |
| C4 (val) | FLASH-Quad | PPLX | 3.828 | 35 | 1mo ago |
| XSUM (randomly sampled) | MLM | U-PPL | 3.8 | 20 | 1mo ago |
| SNLI (randomly sampled) | AG | PPL (U) | 8.57 | 20 | 1mo ago |
| Wikipedia + BookCorpus (dev) | RealFormer | MLM Accuracy | 74.76 | 12 | 1mo ago |
| Books, CC-News, Stories, Wikipedia (held-out set) | BIGBIRD-ETC | BPC | 1.274 | 8 | 1mo ago |
| Turkish Datasets (blackerx/turkish_v2, fthbrmnby/turkish_product_reviews, hazal/Turkish-Biomedical-corpus-trM, newmindai/EuroHPC-Legal) (test) | boun-tabilab/TabiBERT | MLM Avg (%) | 69.57 | 7 | 1mo ago |
| BERT Pretraining Corpus | gMLP_xlarge | Perplexity | 2.89 | 7 | 1mo ago |
| BERT large | DynamiQ | vNMSE | 0.0022 | 6 | 1mo ago |
| Ciao (test) | FT(BERT(T2), Manual) | Perplexity | 5.813 | 6 | 1mo ago |
| ArXiv (test) | FT(BERT(T2), Manual) | Perplexity | 3.499 | 6 | 1mo ago |
| Reddit (test) | FT(BERT(T2), Manual) | Perplexity | 8.906 | 6 | 1mo ago |
| Masked LM | KnowBert-W+W | PPL | 3.5 | 5 | 1mo ago |
| omg prot50 (val) | MUD1 | Wall-clock Time (17.5 Target PPL) | 78 | 4 | 1mo ago |
| 6 languages, averaged (test) | NoOverlap | MRR | 42.7 | 4 | 1mo ago |
| C4 | Primer-EZ Decoder | Log Perplexity | 1.787 | 4 | 1mo ago |
| 20 languages | Unigram | MRR | 52.6 | 3 | 1mo ago |
| GRCh37 human reference genome (held-out set) | BIGBIRD | BPC | 1.12 | 3 | 1mo ago |
| BLLIP (test) | Transformer | Perplexity | 101.91 | 2 | 1mo ago |
| PTB (test) | Transformer | Perplexity | 58.43 | 2 | 1mo ago |
Showing 19 of 19 rows
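The table mixes several evaluation metrics (PPL/PPLX, U-PPL, Log Perplexity, BPC) that are all simple transforms of the model's average cross-entropy loss over masked tokens. A minimal sketch of those conversions, assuming per-token negative log-likelihoods in nats (the values below are illustrative, not taken from any listed model):

```python
import math

def mlm_metrics(nlls):
    """Convert per-masked-token negative log-likelihoods (in nats)
    into the metric families shown in the table above."""
    mean_nll = sum(nlls) / len(nlls)       # average cross-entropy, in nats
    return {
        "log_perplexity": mean_nll,        # "Log Perplexity" column (natural log)
        "perplexity": math.exp(mean_nll),  # PPL / PPLX
        # BPC only equals mean_nll / ln(2) when losses are per character
        "bpc": mean_nll / math.log(2),
    }

# Hypothetical per-token losses for three masked positions
m = mlm_metrics([0.5, 0.7, 0.55])
```

Lower is better for all three; MLM Accuracy and MRR, by contrast, are higher-is-better, which is worth keeping in mind when comparing rows.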