
Quantifying Genuine Awareness in Hallucination Prediction Beyond Question-Side Shortcuts

About

Many works have proposed methodologies for language model (LM) hallucination detection and reported seemingly strong performance. However, we argue that the performance reported to date reflects not only a model's genuine awareness of its internal information, but also awareness derived purely from question-side information (e.g., benchmark hacking). While benchmark hacking can boost hallucination detection scores on existing benchmarks, it does not generalize to out-of-domain settings or practical usage. Nevertheless, disentangling how much of a model's hallucination detection performance arises from question-side awareness is non-trivial. To address this, we propose Approximate Question-side Effect (AQE), a methodology for measuring this effect without requiring human labor. Our analysis using AQE reveals that existing hallucination detection methods rely heavily on benchmark hacking.
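The abstract does not spell out how AQE is computed, but the underlying idea of isolating question-side effects can be sketched as follows. A minimal, hypothetical approach: score the same examples with a question-only baseline (no access to the model's internals) and with a full detector, then compare their AUROCs; the question-only AUROC approximates how far question-side information alone carries a detector. All variable names and toy numbers below are illustrative assumptions, not the paper's actual method or data.

```python
def auroc(scores, labels):
    """Pairwise AUROC: probability that a random positive example
    receives a higher score than a random negative one (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative label")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy labels: 1 = the model hallucinated on this question.
labels = [1, 0, 1, 0, 1, 0, 0, 1]

# Hypothetical scores from a full detector (question + model-internal signals).
full_scores = [0.9, 0.2, 0.8, 0.3, 0.7, 0.1, 0.4, 0.6]

# Hypothetical scores from a question-only baseline: a classifier trained
# on the question text alone, never seeing the model's answer or internals.
question_only_scores = [0.6, 0.4, 0.7, 0.5, 0.5, 0.3, 0.6, 0.5]

full_auc = auroc(full_scores, labels)
q_auc = auroc(question_only_scores, labels)

# The gap suggests how much performance cannot be explained by
# question-side shortcuts alone.
print(f"full detector AUROC:   {full_auc:.3f}")
print(f"question-only AUROC:   {q_auc:.3f}")
print(f"gap (beyond shortcut): {full_auc - q_auc:.3f}")
```

A large question-only AUROC on a benchmark would indicate that much of a detector's apparent "self-awareness" is recoverable from the questions themselves, which is the paper's central concern.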

Yeongbin Seo, Dongha Lee, Jinyoung Yeo • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Hallucination Prediction | Mintaka refined by question type | AUROC 77.89 | 20 |
| Hallucination Prediction | Mintaka refined by question type and domain | AUROC 75.51 | 20 |
| Hallucination Prediction | Explain original | Accuracy 80.91 | 20 |
| Hallucination Prediction | Explain (+ domain) | Accuracy 64.87 | 20 |
| Hallucination Prediction | Explain unrefined (original) | AUROC 85.42 | 10 |
| Hallucination Prediction | Explain domain refined | AUROC 70.04 | 10 |
| Hallucination Prediction | Mintaka unrefined (original) | AUROC 79.41 | 10 |
| Hallucination Prediction | Mintaka (original) | AUROC 79.41 | 10 |
| Hallucination Prediction | ParaRel + domain | Accuracy 69.24 | 10 |
| Hallucination Prediction | HotpotQA (original) | AUROC 83.39 | 10 |

Showing 10 of 12 rows.
