Long-context Language Modeling on LongPPL 32k
[Chart: Book Perplexity over time for Engram-27B; latest point Jan 12, 2026. Metric tabs: Book Perplexity, Paper Perplexity, Code Perplexity, L-CoT Perplexity. Updated 4d ago.]
Evaluation Results

Method       Details                    Date     Book Perplexity  Paper Perplexity  Code Perplexity  L-CoT Perplexity
Engram-27B   Pre-training Steps=50k...  2026.01  4.14             2.82             2.44             13.41
Engram-27B   Pre-training Steps=46k...  2026.01  4.19             2.84             2.45             13.59
Engram-27B   Pre-training Steps=41k...  2026.01  4.37             2.92             2.50             14.26
MoE-27B      Pre-training Steps=50k...  2026.01  4.38             2.91             2.49             14.16
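For reference, the perplexity numbers reported above are conventionally computed as the exponential of the mean per-token negative log-likelihood over the evaluation corpus. A minimal sketch (the helper name and toy log-probabilities are illustrative, not taken from this leaderboard):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(mean negative log-likelihood per token)."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Toy example: if the model assigns probability 1/4 to every token,
# the perplexity is (approximately) 4.
print(perplexity([math.log(0.25)] * 10))
```

Lower is better: a perplexity of 4.14 on books means the model is, on average, about as uncertain as a uniform choice among ~4 tokens at each step.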