Language Modeling on LLaMA-2-7B
[Chart: Perplexity over time, Feb 2024 – Feb 2026; best result: 5.47 (BF16)]
Evaluation Results
| Method | Details | Date | Perplexity |
|---|---|---|---|
| BF16 | Quantization Bits=None... | 2024.02 | 5.47 |
| AQLM + PV | Variant=2-bit-1x16, Mo... | 2026.02 | 6.08 |
| AQLM + PV | Variant=2-bit-2x8, Mod... | 2026.02 | 6.27 |
| QTIP | Variant=2-bit, Model S... | 2026.02 | 6.29 |
| AQLM | Variant=2-bit-1x16, Mo... | 2026.02 | 6.34 |
| AQLM | Variant=2-bit-2x8, Mod... | 2026.02 | 7.24 |
| NANOQUANT | Variant=2-bit, Model S... | 2026.02 | 7.35 |
| BitDistiller | Quantization Bits=2, G... | 2024.02 | 8.08 |
| QuIP# | Quantization Bits=2, G... | 2024.02 | 8.97 |
| NANOQUANT | Variant=1-bit, Model S... | 2026.02 | 9.01 |
| QuIP | Quantization Bits=2, G... | 2024.02 | 728.15 |
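Perplexity, the metric reported above, is the exponential of the mean per-token negative log-likelihood: lower is better, and the BF16 row is the unquantized baseline. A minimal sketch of the computation (the NLL values below are illustrative, not from the source):

```python
import math

def perplexity(nll_per_token):
    """Perplexity = exp of the mean per-token negative log-likelihood (in nats)."""
    return math.exp(sum(nll_per_token) / len(nll_per_token))

# A mean NLL of ~1.6994 nats corresponds to a perplexity of ~5.47,
# matching the BF16 baseline in the table above.
print(round(perplexity([1.6994] * 4), 2))
```

Note that a 2-bit method scoring 6.08 versus the baseline's 5.47 implies only a modest increase in average per-token NLL, while QuIP's 728.15 indicates the model has effectively broken down at that bit width.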