
tinyBenchmarks: evaluating LLMs with fewer examples

About

The versatility of large language models (LLMs) has led to the creation of diverse benchmarks that thoroughly test a variety of language models' abilities. These benchmarks consist of tens of thousands of examples, making evaluation of LLMs very expensive. In this paper, we investigate strategies to reduce the number of evaluations needed to assess the performance of an LLM on several key benchmarks. For example, we show that to accurately estimate the performance of an LLM on MMLU, a popular multiple-choice QA benchmark consisting of 14K examples, it is sufficient to evaluate this LLM on 100 curated examples. We release evaluation tools and tiny versions of popular benchmarks: Open LLM Leaderboard, MMLU, HELM, and AlpacaEval 2.0. Our empirical analysis demonstrates that these tools and tiny benchmarks are sufficient to reliably and efficiently reproduce the original evaluation results.

Felipe Maia Polo, Lucas Weber, Leshem Choshen, Yuekai Sun, Gongjun Xu, Mikhail Yurochkin • 2024
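
The core idea is to replace a full benchmark with a small set of curated examples whose results predict full-benchmark performance. The sketch below illustrates this with a simple weighted-anchor estimator, in which each curated example stands in for a cluster of similar benchmark examples; it is a minimal illustration on assumed data, not the authors' exact IRT-based estimators or the released tinyBenchmarks API.

```python
import numpy as np

def estimate_accuracy(correct_on_anchors: np.ndarray, cluster_sizes: np.ndarray) -> float:
    """Estimate full-benchmark accuracy from correctness on curated anchor examples.

    correct_on_anchors: shape (k,), 1.0 if the model answered anchor i correctly, else 0.0.
    cluster_sizes:      shape (k,), number of full-benchmark examples each anchor represents.
    """
    weights = cluster_sizes / cluster_sizes.sum()
    return float(np.dot(weights, correct_on_anchors))

# Hypothetical usage: 100 curated anchors standing in for a ~14K-example benchmark.
rng = np.random.default_rng(0)
cluster_sizes = rng.integers(50, 250, size=100).astype(float)  # assumed cluster sizes
correct = rng.integers(0, 2, size=100).astype(float)           # assumed per-anchor correctness
print(f"Estimated benchmark accuracy: {estimate_accuracy(correct, cluster_sizes):.3f}")
```

Evaluating a model on the 100 anchors and weighting the results in this way yields a single number that approximates accuracy on the full example set; the paper's estimators refine this idea with item response theory.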

Related benchmarks

| Task | Dataset | Result | Rank |
|------|---------|--------|------|
| Model Performance Prediction | DeepSeek Model Families (Hold-out) | MAE 1.595 | 45 |
| Benchmark Compression | MMLU_Pro (test) | Spearman rho 0.92 | 20 |
| Benchmark Compression (Coreset selection) | SEED-Bench-2-Plus (full) | rho 0.863 | 20 |
| Benchmark Compression (Coreset selection) | BBH (full) | rho 0.901 | 20 |
| Benchmark Compression | ARC Challenge (test) | Spearman rho 0.884 | 20 |
| LLM Performance Estimation | GSM8K (test) | MAE (%) 2.424 | 20 |
| LLM Performance Estimation | WinoGrande (test) | MAE 1.957 | 20 |
| Benchmark Compression (Coreset selection) | GSM8K (full) | rho 0.896 | 20 |
| LLM Performance Estimation | ARC (test) | MAE (%) 2.274 | 20 |
| LLM Performance Estimation | HELLASWAG (test) | MAE (%) 1.75 | 20 |

Showing 10 of 25 rows.
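
The table reports two kinds of numbers: mean absolute error (MAE) between scores estimated from the compressed benchmark and scores on the full benchmark, and Spearman rho between the model rankings the two induce. The sketch below shows how such metrics are typically computed over a set of evaluated models; the scores are made up for illustration and are not taken from the leaderboard.

```python
import numpy as np
from scipy.stats import spearmanr

# Illustrative (made-up) scores for five models.
true_scores = np.array([0.71, 0.64, 0.58, 0.52, 0.47])  # accuracy on the full benchmark
est_scores  = np.array([0.69, 0.66, 0.57, 0.54, 0.45])  # accuracy estimated from the tiny benchmark

mae = np.mean(np.abs(true_scores - est_scores)) * 100    # mean absolute error, in percentage points
rho, _ = spearmanr(true_scores, est_scores)              # rank correlation of model orderings

print(f"MAE (%): {mae:.2f}")
print(f"Spearman rho: {rho:.3f}")
```

A low MAE means the compressed benchmark recovers each model's score closely; a high Spearman rho means it preserves how models rank against one another even when individual scores drift slightly.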
