Language Modeling on Language Modeling Dataset PPL Llama-2-70B
[Chart: Perplexity leaderboard over time; highlighted point: Dense baseline, 3.32 perplexity. Last updated May 23, 2025.]
Evaluation Results

| Method          | Pruning Ratio | Date    | Perplexity |
|-----------------|---------------|---------|------------|
| Dense           | 0%            | 2025.05 | 3.32       |
| TRSP-ℓ2         | 25%           | 2025.05 | 4.13       |
| TRSP-ℓ1         | 25%           | 2025.05 | 4.28       |
| ShortGPT        | 25%           | 2025.05 | 4.85       |
| LaCo            | 25%           | 2025.05 | 4.92       |
| Shortened LLaMA | 25%           | 2025.05 | 4.98       |
| SLEB            | 25%           | 2025.05 | 5.06       |
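Perplexity (PPL), the metric reported above, is the exponential of the average per-token negative log-likelihood assigned by the model; lower is better. A minimal sketch of the computation, using a hypothetical list of per-token NLLs rather than a real model:

```python
import math

def perplexity(nlls):
    """Perplexity = exp(mean per-token negative log-likelihood)."""
    return math.exp(sum(nlls) / len(nlls))

# Sanity check: a model that assigns uniform probability 1/V over a
# vocabulary of size V has per-token NLL of log(V), so its perplexity
# equals V exactly.
V = 50
uniform_nlls = [math.log(V)] * 100
print(perplexity(uniform_nlls))  # 50.0 (up to float rounding)
```

In practice the NLLs come from evaluating the model over a held-out corpus with a sliding context window; the table entries compare how much each pruning method degrades this score relative to the dense baseline.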