Common-sense QA on OpenbookQA
[Chart: Accuracy on OpenbookQA over time, Aug 13, 2023 – Jul 31, 2024; current state of the art: Gemma 7B at 52.8 accuracy. Metric toggle: Accuracy / Perplexity. Updated 4d ago.]
Evaluation Results
Method           Details                     Date     Accuracy  Perplexity
Gemma 7B         Parameters=7B               2024.07  52.8      -
Mixtral 8×22B    Parameters=8×22B            2024.07  50.8      -
OPT-2.7B FP16    Model=OPT-2.7B, Precis...   2023.08  49.6      26.16
Llama 3 405B     Parameters=405B             2024.07  49.2      -
Mistral 7B       Parameters=7B               2024.07  47.8      -
Llama 3 70B      Parameters=70B              2024.07  47.6      -
TSLD             Model=OPT-2.7B, Quanti...   2023.08  46.81     28.93
GT+Logit         Model=OPT-2.7B, Quanti...   2023.08  46.2      31.08
Logit            Model=OPT-2.7B, Quanti...   2023.08  45.4      29.41
Llama 3 8B       Parameters=8B               2024.07  45        -
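The table reports two metrics: accuracy (fraction of multiple-choice questions answered correctly) and perplexity (exponential of the mean per-token negative log-likelihood). The sketch below shows how these are conventionally computed; it is a minimal illustration, not the benchmark's official evaluation harness, and the example inputs are made up.

```python
import math

def accuracy(preds, labels):
    # Fraction of predicted answer choices that match the gold labels.
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def perplexity(token_nlls):
    # Perplexity = exp of the mean per-token negative log-likelihood.
    return math.exp(sum(token_nlls) / len(token_nlls))

# Hypothetical examples: 3 of 4 answers correct, and a uniform
# 4-way token distribution (NLL = ln 4 per token) gives perplexity 4.
print(accuracy(["A", "B", "C", "D"], ["A", "B", "C", "A"]))   # 0.75
print(perplexity([math.log(4.0)] * 10))                        # 4.0
```

Lower perplexity is better, while higher accuracy is better, which is why the quantized OPT-2.7B entries report both.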