Language Modeling Evaluation on ARC, HellaSwag, MMLU, TruthfulQA, and WinoGrande
[Chart: ARC Accuracy over time; most recent data point: BOFT at 34.64 (Jan 27, 2026)]
Evaluation Results
| Method | Memory | Date | ARC Accuracy | HellaSwag Accuracy | MMLU Accuracy | TruthfulQA Accuracy | WinoGrande Accuracy | Average Score |
|---|---|---|---|---|---|---|---|---|
| BOFT | Ave. Mem.=145.1%, Max.... | 2026.01 | 34.64 | 51.7 | 58.18 | 39.57 | 56.43 | 48.1 |
| QLoRA | Ave. Mem.=51.7%, Max.... | 2026.01 | 34.64 | 50.1 | 58.05 | 40.41 | 55.09 | 47.66 |
| QLoRA w/ TOKENSEEK | Ave. Mem.=19.2%, Max.... | 2026.01 | 34.56 | 50.09 | 57.52 | 41.51 | 58.56 | 48.45 |
| RanLoRA | Ave. Mem.=95.4%, Max.... | 2026.01 | 29.18 | 50.1 | 58.33 | 45.21 | 57.22 | 48.01 |
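The Average Score column appears to be the unweighted mean of the five task accuracies, rounded to two decimals; this checks out against every row. A minimal sketch reproducing it (the dictionary below just restates the table values, and the "unweighted mean" interpretation is an assumption, not documented on the page):

```python
# Assumption: Average Score = unweighted mean of the five task accuracies,
# rounded to two decimal places. Scores copied from the results table.
results = {
    "BOFT":               [34.64, 51.70, 58.18, 39.57, 56.43],
    "QLoRA":              [34.64, 50.10, 58.05, 40.41, 55.09],
    "QLoRA w/ TOKENSEEK": [34.56, 50.09, 57.52, 41.51, 58.56],
    "RanLoRA":            [29.18, 50.10, 58.33, 45.21, 57.22],
}

for method, scores in results.items():
    avg = round(sum(scores) / len(scores), 2)  # mean over 5 tasks
    print(f"{method}: {avg}")
# BOFT: 48.1, QLoRA: 47.66, QLoRA w/ TOKENSEEK: 48.45, RanLoRA: 48.01
```

Each computed mean matches the table's Average Score, which suggests no per-task weighting is applied.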