Few-shot Language Evaluation on ARC, HellaSwag, MMLU, TruthfulQA, and WinoGrande
[Chart: ARC Accuracy over time — best score 56.06 (QLoRA), as of Jan 27, 2026. Metrics tracked: ARC, HellaSwag, MMLU, TruthfulQA, and WinoGrande accuracy, plus Average Score. Updated 1mo ago.]
Evaluation Results

| Method | Config | Date | ARC Accuracy | HellaSwag Accuracy | MMLU Accuracy | TruthfulQA Accuracy | WinoGrande Accuracy | Average Score |
|---|---|---|---|---|---|---|---|---|
| QLoRA | Backbone=Llama2 (7B),... | 2026.01 | 56.06 | 78.6 | 65.08 | 43.64 | 69.38 | 62.55 |
| TOKENSEEK | Backbone=Llama2 (7B),... | 2026.01 | 53.5 | 78.82 | 65.26 | 44.62 | 68.51 | 62.14 |
| TOKENTUNE | Backbone=Llama2 (7B),... | 2026.01 | 53.16 | 78.76 | 63.64 | 39.58 | 69.22 | 60.87 |
| Full Parameter Tuning | Backbone=Llama2 (7B),... | 2026.01 | 52.39 | 78.97 | 64.44 | 38.97 | 68.9 | 60.73 |
| TOKENSEEK | Backbone=Llama2 (7B),... | 2026.01 | 52.22 | 78.96 | 65.28 | 39.95 | 68.43 | 60.97 |
| TOKENTUNE | Backbone=Llama2 (7B),... | 2026.01 | 51.71 | 78.35 | 61.56 | 41.88 | 70.01 | 60.7 |
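The Average Score column is consistent with a plain (unweighted) arithmetic mean of the five benchmark accuracies. A minimal sketch checking a few rows, using the values reported above (the leaderboard itself does not state the aggregation rule, so the mean is an assumption verified against the numbers):

```python
# Check that Average Score == mean of the five benchmark accuracies,
# rounded to two decimals. Values copied from the table above.
rows = {
    "QLoRA":                 ([56.06, 78.60, 65.08, 43.64, 69.38], 62.55),
    "TOKENSEEK":             ([53.50, 78.82, 65.26, 44.62, 68.51], 62.14),
    "TOKENTUNE":             ([53.16, 78.76, 63.64, 39.58, 69.22], 60.87),
    "Full Parameter Tuning": ([52.39, 78.97, 64.44, 38.97, 68.90], 60.73),
}

for method, (scores, reported_avg) in rows.items():
    computed = round(sum(scores) / len(scores), 2)
    print(f"{method}: computed={computed}, reported={reported_avg}")
    assert computed == reported_avg  # → all four rows match
```

Note that under this aggregation each benchmark contributes equally, so a method can lead on Average Score (as QLoRA does here) while trailing on individual tasks such as HellaSwag or WinoGrande.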