Zero-shot Language Modeling on Wikitext and LAMBADA
[Chart: Wikitext Perplexity over time — current best 25.46, GPT-2 XLarge (Full model), Apr 12, 2024]
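The benchmark reports zero-shot perplexity on Wikitext and LAMBADA, i.e. the exponentiated average next-token negative log-likelihood with no fine-tuning. Below is a minimal sketch of how the Wikitext number is typically measured for GPT-2 XL, assuming the standard Hugging Face transformers/datasets stack; the leaderboard's exact split, tokenization, and windowing protocol are not specified here, so treat the details (model id, dataset config, window size) as illustrative assumptions.

```python
# Sketch: zero-shot Wikitext perplexity for GPT-2 XL (assumed protocol, not the
# leaderboard's official evaluation script).
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
from datasets import load_dataset

device = "cuda" if torch.cuda.is_available() else "cpu"
model = GPT2LMHeadModel.from_pretrained("gpt2-xl").to(device).eval()
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-xl")

# Assumption: Wikitext-103 raw test split; the benchmark may use another config.
text = "\n\n".join(load_dataset("wikitext", "wikitext-103-raw-v1", split="test")["text"])
encodings = tokenizer(text, return_tensors="pt")

max_len, stride = 1024, 1024  # non-overlapping windows for simplicity
nlls, n_tokens = [], 0
for start in range(0, encodings.input_ids.size(1), stride):
    input_ids = encodings.input_ids[:, start:start + max_len].to(device)
    if input_ids.size(1) < 2:
        break
    with torch.no_grad():
        # Passing labels=input_ids lets the model shift them internally and
        # return the mean next-token cross-entropy over this window.
        loss = model(input_ids, labels=input_ids).loss
    nlls.append(loss * (input_ids.size(1) - 1))  # convert mean NLL to summed NLL
    n_tokens += input_ids.size(1) - 1

print("Wikitext perplexity:", math.exp(torch.stack(nlls).sum().item() / n_tokens))
```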
Evaluation Results
Method | Configuration | Date | Wikitext Perplexity | LAMBADA Perplexity | Average Perplexity
GPT-2 XLarge (Full model) | Model Layers=48, Train... | 2024.04 | 25.46 | 20.24 | 22.85
Inheritune | Model Layers=24, Train... | 2024.04 | 25.52 | 16.51 | 21.01
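The Average Perplexity column appears to be the plain arithmetic mean of the Wikitext and LAMBADA figures (this relation is inferred from the listed numbers, not stated on the page), as in the sketch below.

```python
# Assumed relation between the columns: average = mean of the two dataset perplexities.
wikitext_ppl, lambada_ppl = 25.52, 16.51   # Inheritune row
print(f"{(wikitext_ppl + lambada_ppl) / 2:.2f}")   # -> 21.01 (21.015 rounded)
```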