
Lessons from the Trenches on Reproducible Evaluation of Language Models

About

Effective evaluation of language models remains an open challenge in NLP. Researchers and engineers face methodological issues such as the sensitivity of models to evaluation setup, difficulty of proper comparisons across methods, and the lack of reproducibility and transparency. In this paper we draw on three years of experience in evaluating large language models to provide guidance and lessons for researchers. First, we provide an overview of common challenges faced in language model evaluation. Second, we delineate best practices for addressing or lessening the impact of these challenges on research. Third, we present the Language Model Evaluation Harness (lm-eval): an open source library for independent, reproducible, and extensible evaluation of language models that seeks to address these issues. We describe the features of the library as well as case studies in which the library has been used to alleviate these methodological concerns.

Stella Biderman, Hailey Schoelkopf, Lintang Sutawika, Leo Gao, Jonathan Tow, Baber Abbasi, Alham Fikri Aji, Pawan Sasanka Ammanamanchi, Sidney Black, Jordan Clive, Anthony DiPofi, Julen Etxaniz, Benjamin Fattori, Jessica Zosa Forde, Charles Foster, Jeffrey Hsu, Mimansa Jaiswal, Wilson Y. Lee, Haonan Li, Charles Lovering, Niklas Muennighoff, Ellie Pavlick, Jason Phang, Aviya Skowron, Samson Tan, Xiangru Tang, Kevin A. Wang, Genta Indra Winata, François Yvon, Andy Zou • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Commonsense Reasoning | WinoGrande | Accuracy | 62.9 | 1085
Question Answering | ARC-E | Accuracy | 69.6 | 416
Question Answering | PIQA | Accuracy | 76.6 | 374
Question Answering | SciQ | -- | -- | 283
Sentence Completion | HellaSwag | Accuracy | 48.1 | 276
Language Modeling | Lambada OpenAI | Accuracy | 67.2 | 127
Reading Comprehension | RACE | Accuracy | 36.9 | 70
Question Answering | ARC-C | Accuracy | 35.6 | 46
Mean Performance Evaluation | Downstream Tasks Summary | Average Accuracy | 60.1 | 36
Language Modeling | Lambada Standard | Accuracy | 55.9 | 36
