Language Modeling on C4 (LLaMA Model Perplexity Evaluation)
Chart: perplexity over time for Full Precision (7.26 on LLaMA-2 7B) and binarized methods, across LLaMA-2 7B/8B, LLaMA-2 13B, and LLaMA-3 8B. Last data point Dec 25, 2025; updated 1 month ago.
Evaluation Results

| Method | Configuration | Date | Perplexity (LLaMA-2 7B/8B) | Perplexity (LLaMA-2 13B) | Perplexity (LLaMA-3 8B) |
|---|---|---|---|---|---|
| Full Precision | Weight Bits=16 | 2025.12 | 7.26 | 6.73 | 9.45 |
| Ours | Output Alignment (OA)=... | 2025.12 | 19.25 | 13.8 | 35.14 |
| ARB-RC | Weight Alignment (WA)=... | 2025.12 | 20.4 | 14.77 | 36.04 |
| ARB-X | Output Alignment (OA)=... | 2025.12 | 28.02 | 19.82 | 41.86 |
| BiLLM | Weight Alignment (WA)=... | 2025.12 | 39.38 | 25.87 | 61.04 |
| PB-LLM | Weight Alignment (WA)=... | 2025.12 | 80.69 | 184.67 | 104.15 |
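The metric reported above, perplexity, is the exponential of the mean per-token negative log-likelihood of the evaluation corpus (here C4) under the model; lower is better, and a binarized model's gap to Full Precision measures how much quantization hurt. A minimal sketch of the computation, using hypothetical per-token log-probabilities rather than a real model:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(-(1/N) * sum of per-token log-probabilities)."""
    n = len(token_logprobs)
    return math.exp(-sum(token_logprobs) / n)

# Hypothetical natural-log probabilities the model assigned to 4 tokens.
logprobs = [-1.2, -0.7, -2.3, -0.9]
print(perplexity(logprobs))
```

In a real evaluation, `token_logprobs` would come from running the LLaMA model over C4 text and taking the log-softmax score of each ground-truth next token; the table's numbers are corpus-level aggregates of exactly this quantity.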