
Inference-Time Decontamination: Reusing Leaked Benchmarks for Large Language Model Evaluation

About

The training process of large language models (LLMs) often involves varying degrees of test data contamination. Although current LLMs achieve increasingly strong performance on various benchmarks, their performance in practical applications does not always match their benchmark results. Leakage of benchmarks can prevent the accurate assessment of LLMs' true capabilities, yet constructing new benchmarks is costly, labor-intensive, and still carries the risk of leakage. We therefore ask: can we reuse these leaked benchmarks for LLM evaluation? We propose Inference-Time Decontamination (ITD), which detects leaked samples and rewrites them without altering their difficulty, thereby mitigating the performance inflation caused by memorization of leaked benchmarks. Our proof-of-concept experiments demonstrate that ITD reduces inflated accuracy by 22.9% on GSM8K and 19.0% on MMLU. On MMLU, applying ITD decreases the results of Phi3 and Mistral by 6.7% and 3.6%, respectively. We hope that ITD can provide more truthful evaluation results for large language models.
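The detect-then-rewrite pipeline described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the detector here uses simple word-level n-gram overlap against a reference corpus, and the rewriter is a caller-supplied callable (in the paper, rewriting is performed by an LLM under the constraint that difficulty is preserved).

```python
# Illustrative sketch of an inference-time decontamination pipeline:
# flag likely-leaked benchmark items by n-gram overlap with a reference
# corpus, then rewrite only the flagged items before evaluation.
# The detector and rewriter are placeholder assumptions, not ITD's
# actual components.

def ngrams(text: str, n: int = 5) -> set:
    """Return the set of word-level n-grams in `text`."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_leaked(sample: str, corpus: list, n: int = 5,
              threshold: float = 0.3) -> bool:
    """Flag a sample whose n-gram overlap with any corpus document
    meets or exceeds `threshold` (fraction of the sample's n-grams)."""
    sample_grams = ngrams(sample, n)
    if not sample_grams:
        return False
    for doc in corpus:
        overlap = len(sample_grams & ngrams(doc, n)) / len(sample_grams)
        if overlap >= threshold:
            return True
    return False

def decontaminate(benchmark: list, corpus: list, rewrite) -> list:
    """Rewrite only the samples detected as leaked; keep the rest unchanged."""
    return [rewrite(s) if is_leaked(s, corpus) else s for s in benchmark]
```

A model is then evaluated on the decontaminated benchmark as usual; since only detected samples are rewritten, any score drop relative to the original benchmark can be attributed to memorization of the leaked items.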

Qin Zhu, Qingyuan Cheng, Runyu Peng, Xiaonan Li, Tengxiao Liu, Ru Peng, Xipeng Qiu, Xuanjing Huang • 2024

Related benchmarks

Task                             | Dataset                                 | Result        | Rank
Language Understanding           | MMLU (o=1, Exact split)                 | Accuracy 77.6 | 42
Multitask Language Understanding | MMLU (o=3, Exact split)                 | Accuracy 89.8 | 42
Language Understanding           | MMLU (o=1, Semantic-level split)        | Accuracy 76.6 | 21
Question Answering               | TruthfulQA (o=3, Exact split)           | Accuracy 97.6 | 21
Question Answering               | TruthfulQA (o=3, Semantic-level split)  | Accuracy 97.1 | 21
Question Answering               | TruthfulQA (o=3, Domain-level split)    | Accuracy 89.9 | 21
Question Answering               | TruthfulQA (o=1, Exact split)           | Accuracy 89.0 | 21
Question Answering               | TruthfulQA (o=1, Semantic-level split)  | Accuracy 89.0 | 21
Question Answering               | TruthfulQA (o=1, Domain-level split)    | Accuracy 86.5 | 21
Multitask Language Understanding | MMLU (o=3, Semantic-level split)        | Accuracy 86.8 | 21

(Showing 10 of 12 rows)
