
metabench -- A Sparse Benchmark of Reasoning and Knowledge in Large Language Models

About

Large Language Models (LLMs) vary in their abilities on a range of tasks. Initiatives such as the Open LLM Leaderboard aim to quantify these differences with several large benchmarks (sets of test items to which an LLM can respond either correctly or incorrectly). However, high correlations within and between benchmark scores suggest that (1) there exists a small set of common underlying abilities that these benchmarks measure, and (2) items tap into redundant information and the benchmarks may thus be considerably compressed. We use data from n > 5000 LLMs to identify the most informative items of six benchmarks, ARC, GSM8K, HellaSwag, MMLU, TruthfulQA and WinoGrande (with d = 28,632 items in total). From them we distill a sparse benchmark, metabench, that has less than 3% of the original size of all six benchmarks combined. This new sparse benchmark goes beyond point scores by yielding estimators of the underlying benchmark-specific abilities. We show that these estimators (1) can be used to reconstruct each original individual benchmark score with, on average, 1.24% root mean square error (RMSE), (2) reconstruct the original total score with 0.58% RMSE, and (3) have a single underlying common factor whose Spearman correlation with the total score is r = 0.94.

Alex Kipnis, Konstantinos Voudouris, Luca M. Schulze Buschoff, Eric Schulz • 2024
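To make the compression idea concrete, here is a minimal, self-contained sketch of reconstructing a full benchmark score from a small item subset. This is not the paper's pipeline (metabench fits item response theory models and selects items by their information content); the simulated Rasch-style data, the crude item-total-correlation filter, the 3% subset size, and the linear refit are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Toy Rasch-style world (an assumption, not the paper's data): each of 5000
# "models" has a latent ability, each of 1000 items a difficulty, and
# P(correct) follows a logistic curve in (ability - difficulty).
n_models, n_items = 5000, 1000
ability = rng.normal(size=n_models)
difficulty = rng.normal(size=n_items)
p_correct = 1.0 / (1.0 + np.exp(-(ability[:, None] - difficulty[None, :])))
responses = (rng.random((n_models, n_items)) < p_correct).astype(float)

full_score = responses.mean(axis=1) * 100.0  # "original" benchmark score (%)

# Stand-in for item selection: keep the ~3% of items whose responses
# correlate most with the total score (metabench instead uses IRT-based
# item information; this filter is a deliberate simplification).
item_total_corr = np.array(
    [np.corrcoef(responses[:, j], full_score)[0, 1] for j in range(n_items)]
)
keep = np.argsort(-item_total_corr)[: max(1, int(0.03 * n_items))]

# Reconstruct the full score from the sparse subset with a linear refit.
sub_score = responses[:, keep].mean(axis=1) * 100.0
slope, intercept = np.polyfit(sub_score, full_score, deg=1)
reconstructed = slope * sub_score + intercept

rmse = np.sqrt(np.mean((reconstructed - full_score) ** 2))
rho, _ = spearmanr(reconstructed, full_score)
print(f"RMSE = {rmse:.2f} score points, Spearman rho = {rho:.3f}")
```

In practice the item selection and the refit coefficients should be estimated on a held-out split of models, so that the reported reconstruction error is not inflated by overfitting the subset to the very models it is evaluated on.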

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Model Ranking Prediction | Helpsteer 70B+ Models Holdout (test) | Pairwise Accuracy (RM1) | 56.4 | 4 |
| Model Ranking Prediction | Helpsteer 13B+ Models Holdout (test) | Pairwise Accuracy (RM1-Helpful) | 58.4 | 4 |
| Pairwise Preference Ranking | Helpsteer 2% holdout (test) | Pairwise Accuracy (RM1) | 72.7 | 4 |
| Pairwise Preference Ranking | Helpsteer 5% holdout (test) | Pairwise Accuracy (RM1-Helpful) | 70.8 | 4 |
| Pairwise Preference Ranking | UltraFeedback 2% holdout (test) | Pairwise Accuracy (RM1-Honest) | 74.6 | 4 |
| Pairwise Preference Ranking | UltraFeedback 5% holdout (test) | Pairwise Accuracy (RM1-Honest) | 71.8 | 4 |
| Model Ranking Prediction | Helpsteer 30B+ Models Holdout (test) | Pairwise Accuracy (RM1) | 60.6 | 4 |
| Model Ranking Prediction | UltraFeedback 70B+ Models Holdout (test) | Pairwise Accuracy (RM1-Honest) | 58.1 | 4 |
| Model Ranking Prediction | UltraFeedback 13B+ Models Holdout (test) | Pairwise Accuracy (RM1-Honest) | 58.6 | 4 |
| Pairwise Preference Ranking | Helpsteer 10% holdout (test) | Pairwise Accuracy (RM1-Helpful) | 67.4 | 4 |
Showing 10 of 12 rows
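For context on the metric above: pairwise accuracy for a reward model is the fraction of preference pairs in which the model scores the human-preferred (chosen) response above the rejected one. Below is a minimal sketch of that computation; the function name, the made-up scores, and the tie-handling convention are my own assumptions, and the leaderboards above may treat ties differently.

```python
import numpy as np

def pairwise_accuracy(chosen_scores: np.ndarray, rejected_scores: np.ndarray) -> float:
    """Fraction of preference pairs where the chosen response is scored
    strictly above the rejected one. Ties count as failures here; some
    evaluations instead award ties half credit."""
    return float(np.mean(chosen_scores > rejected_scores))

# Illustrative usage with made-up reward-model scores for 4 pairs:
chosen = np.array([1.9, 0.3, 2.2, -0.1])
rejected = np.array([1.1, 0.8, 1.0, -0.5])
print(pairwise_accuracy(chosen, rejected))  # 0.75
```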
