
IDGen: Item Discrimination Induced Prompt Generation for LLM Evaluation

About

As Large Language Models (LLMs) grow increasingly adept at handling complex tasks, evaluation sets must keep pace with these advancements to remain sufficiently discriminative. Item Discrimination (ID) theory, widely used in educational assessment, measures the ability of individual test items to differentiate between high and low performers. Inspired by this theory, we propose an ID-induced prompt synthesis framework for evaluating LLMs, so that the evaluation set can be continually updated and refined as model abilities advance. Our data synthesis framework prioritizes both breadth and specificity. It can generate prompts that comprehensively evaluate the capabilities of LLMs while revealing meaningful performance differences between models, allowing for effective discrimination of their relative strengths and weaknesses across various tasks and domains. To produce high-quality data, we incorporate a self-correction mechanism into our generalization framework and develop two models to predict prompt discrimination and difficulty scores, contributing valuable tools to evaluation data synthesis research. We apply our generated data to evaluate five state-of-the-art (SOTA) models. Our data achieves an average score of 51.92 with a variance of 10.06. By contrast, previous works (i.e., SELF-INSTRUCT and WizardLM) obtain average scores exceeding 67 with variances below 3.2. These results demonstrate that the data generated by our framework is more challenging and discriminative than that of previous works. We will release a dataset of over 3,000 carefully crafted prompts to facilitate evaluation research on LLMs.

Fan Lin, Shuyi Xie, Yong Dai, Wenlin Yao, Tianjiao Lang, Zishan Xu, Zhichao Hu, Xiao Xiao, Yuhong Liu, Yu Zhang • 2024
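
For readers unfamiliar with Item Discrimination, the sketch below shows the classic ID index from educational assessment that the abstract builds on: rank examinees (here, models) by total score, then measure how much the top group outperforms the bottom group on each item (prompt). This is a minimal illustration only; the function name, grouping fraction, and toy scores are assumptions, and the paper itself goes further by training models to predict discrimination and difficulty rather than computing them from observed scores.

# Minimal sketch of the classic Item Discrimination (ID) index.
# Illustrative only; names and numbers are not from the paper.
from typing import Dict, List


def discrimination_index(scores: Dict[str, List[float]], group_frac: float = 0.27) -> List[float]:
    """Per-prompt discrimination: mean score of high scorers minus low scorers.

    scores maps a model (examinee) name to its per-prompt scores in [0, 1].
    group_frac is the fraction of examinees in each extreme group
    (27% is the conventional choice in educational assessment).
    """
    names = list(scores)
    totals = {m: sum(scores[m]) for m in names}
    ranked = sorted(names, key=lambda m: totals[m], reverse=True)

    k = max(1, round(group_frac * len(ranked)))
    high, low = ranked[:k], ranked[-k:]

    n_items = len(next(iter(scores.values())))
    index = []
    for i in range(n_items):
        p_high = sum(scores[m][i] for m in high) / len(high)
        p_low = sum(scores[m][i] for m in low) / len(low)
        # Higher value = the prompt separates strong and weak models better.
        index.append(p_high - p_low)
    return index


# Toy usage: three prompts scored for five models (values are made up).
example = {
    "model_a": [1.0, 0.9, 0.2],
    "model_b": [1.0, 0.7, 0.1],
    "model_c": [0.9, 0.5, 0.1],
    "model_d": [1.0, 0.3, 0.0],
    "model_e": [0.9, 0.1, 0.0],
}
print(discrimination_index(example))  # prompt 2 discriminates most, prompt 1 least

A prompt that nearly every model answers well (or that every model fails) yields an index near zero, which is why a discriminative evaluation set favors items with large gaps between strong and weak performers.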

Related benchmarks

Task                                   | Dataset                                    | Result                     | Rank
Instruction Dataset Quality Evaluation | Public Instruction Tuning Datasets (test)  | Discrimination Index: 20.4 | 6
Instruction Following Evaluation       | WizardLM                                   | --                         | 5
Instruction Following Evaluation       | Instruction Tuning with GPT-4              | --                         | 5
Instruction Following Evaluation       | SELF-INSTRUCT seed data                    | --                         | 5
Instruction Following Evaluation       | Self-Instruct                              | --                         | 5
Instruction Following Evaluation       | SELF-INSTRUCT Ours                         | --                         | 5
Instruction Following Evaluation       | Ours hard seed data                        | --                         | 5

Other info

Code
