Language Modeling on PTB (LLaMA Model Evaluation)
[Chart: Perplexity (LLaMA-2 7/8B) over time; best reported value 37.91 (Full Precision). Tabs available for Perplexity (LLaMA-2 13B) and Perplexity (LLaMA-3 8B).]
Evaluation Results
| Method         | Setting                    | Date    | Perplexity (LLaMA-2 7/8B) | Perplexity (LLaMA-2 13B) | Perplexity (LLaMA-3 8B) |
|----------------|----------------------------|---------|---------------------------|--------------------------|-------------------------|
| Full Precision | Weight Bits=16             | 2025.12 | 37.91                     | 50.93                    | 11.18                   |
| PB-LLM         | Weight Alignment (WA)=...  | 2025.12 | 657.24                    | 816.31                   | 106.25                  |
| ARB-X          | Output Alignment (OA)=...  | 2025.12 | 681.24                    | 182.1                    | 53.86                   |
| ARB-RC         | Weight Alignment (WA)=...  | 2025.12 | 763.19                    | 197.7                    | 47.88                   |
| Ours           | Output Alignment (OA)=...  | 2025.12 | 3,166                     | 196.64                   | 45.66                   |
| BiLLM          | Weight Alignment (WA)=...  | 2025.12 | 5,243.01                  | 309.12                   | 87.25                   |
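For reference, the perplexity metric reported above is conventionally the exponential of the mean per-token negative log-likelihood over the evaluation set (here, PTB). A minimal sketch of that formula; the function name and inputs are illustrative, not the evaluation harness used for this leaderboard:

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp(mean per-token negative log-likelihood).

    token_nlls: a list of per-token negative log-likelihoods (natural log),
    e.g. as produced by a language model's cross-entropy loss per token.
    """
    return math.exp(sum(token_nlls) / len(token_nlls))

# Example: if every token has NLL ln(4), perplexity is exactly 4.
# perplexity([math.log(4.0)] * 3)  → 4.0
```

Lower is better: a full-precision model assigns higher probability to the test tokens (lower mean NLL), which is why the binarized/low-bit methods in the table show much larger perplexities than the 16-bit baseline.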