
Language Understanding on LLM Benchmark Suite (MMLU, ARC-C, PIQA, WinoG, GSM8K, HellaSwag, GPQA, RACE) (test)

Top score: 57.93 Overall Accuracy (Base)

Updated: Feb 19, 2026

Evaluation Results

| Method | Submitted | Overall | MMLU | ARC-C | PIQA | WinoG | GSM8K | HellaSwag | GPQA | RACE |
|--------|-----------|---------|------|-------|------|-------|-------|-----------|------|------|
|        | 2026.02   | 57.93 | 65.97 | 43    | 74.1  | 69.3  | 69.29 | 72.7  | 30.4  | 38.7  |
|        | 2026.02   | 57.68 | 65.33 | 43.09 | 74.37 | 69.53 | 68.58 | 71.98 | 30.12 | 38.4  |
|        | 2026.02   | 57.62 | 65.41 | 43.52 | 74.97 | 68.59 | 68.16 | 72.3  | 29.7  | 38.32 |
|        | 2026.02   | 57.43 | 65.2  | 43.94 | 75.3  | 68.59 | 66.03 | 71.95 | 29.85 | 38.55 |
|        | 2026.02   | 57.23 | 65.16 | 43.09 | 74.43 | 67.56 | 67.17 | 72.1  | 30.25 | 38.1  |
|        | 2026.02   | 53.18 | 62.16 | 41.38 | 73.18 | 65.27 | 55.88 | 67.18 | 27.95 | 32.45 |
|        | 2026.02   | 52.7  | 61.43 | 39.08 | 72.63 | 64.56 | 57.01 | 67.52 | 27.15 | 32.2  |
|        | 2026.02   | 52.36 | 60.79 | 39.59 | 72.95 | 65.82 | 52.11 | 67.35 | 27.48 | 32.82 |
|        | 2026.02   | 52.34 | 60.97 | 39.68 | 72.2  | 64.64 | 53.53 | 66.9  | 27.7  | 33.1  |
|        | 2026.02   | 32.57 | 28.6  | 20.99 | 61.75 | 50.04 | 1.52  | 48.2  | 23.9  | 25.55 |
|        | 2026.02   | 32.57 | 28.93 | 21.08 | 60.12 | 51.07 | 1.9   | 48.7  | 23.55 | 25.2  |
|        | 2026.02   | 30.94 | 24.01 | 18.77 | 59.96 | 49.17 | 1.52  | 46.85 | 23.1  | 24.1  |
|        | 2026.02   | 29.99 | 24.76 | 18.52 | 56.69 | 47.43 | 0.99  | 45.25 | 22.85 | 23.45 |
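The Overall Accuracy column appears to be the unweighted mean of the eight per-benchmark scores, rounded to two decimals; this can be checked against the top-ranked row. A minimal sketch (the score values are copied from that row; the aggregation rule is an observation from the data, not a documented formula of the suite):

```python
# Per-benchmark scores from the top-ranked submission above.
scores = {
    "MMLU": 65.97, "ARC-C": 43.0, "PIQA": 74.1, "WinoG": 69.3,
    "GSM8K": 69.29, "HellaSwag": 72.7, "GPQA": 30.4, "RACE": 38.7,
}

# Overall Accuracy = unweighted mean over the eight benchmarks.
overall = round(sum(scores.values()) / len(scores), 2)
print(overall)  # 57.93, matching the Overall column
```

The same rule reproduces the Overall value for the other rows as well, e.g. the bottom row's mean of 29.99.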