Knowing What's Missing: Assessing Information Sufficiency in Question Answering

About

Determining whether a provided context contains sufficient information to answer a question is a critical challenge for building reliable question-answering systems. While simple prompting strategies have shown success on factual questions, they frequently fail on inferential ones that require reasoning beyond direct text extraction. We hypothesize that asking a model to first reason about what specific information is missing provides a more reliable, implicit signal for assessing overall sufficiency. To this end, we propose a structured Identify-then-Verify framework for robust sufficiency modeling. Our method first generates multiple hypotheses about missing information and establishes a semantic consensus. It then performs a critical verification step, forcing the model to re-examine the source text to confirm whether this information is truly absent. We evaluate our method against established baselines across diverse multi-hop and factual QA datasets. The results demonstrate that by guiding the model to justify its claims about missing information, our framework produces more accurate sufficiency judgments while clearly articulating any information gaps.

Akriti Jain, Aparna Garimella • 2025
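
The framework described above reduces to three stages: identify candidate descriptions of missing information, form a consensus across several samples, and verify the consensus claim against the source text. The sketch below is a minimal illustration of that pipeline, not the authors' implementation: the prompts, function names, and the `query_llm` placeholder are assumptions for exposition, and the paper's semantic consensus is simplified here to exact-match voting.

```python
# Illustrative Identify-then-Verify sketch (assumptions noted in comments).
# `query_llm` is a stand-in for any chat-model call you wire in yourself.
from collections import Counter

def query_llm(prompt: str) -> str:
    """Placeholder: route the prompt to an LLM of your choice."""
    raise NotImplementedError

def identify_missing_info(question: str, context: str, n_samples: int = 5) -> list[str]:
    """Identify: sample hypotheses about what, if anything, the context lacks."""
    prompt = (
        f"Question: {question}\nContext: {context}\n"
        "If the context is insufficient to answer the question, state the "
        "specific missing information in one sentence; otherwise reply NONE."
    )
    return [query_llm(prompt).strip() for _ in range(n_samples)]

def consensus(hypotheses: list[str]) -> str:
    """Consensus: keep the most frequent hypothesis.
    (The paper establishes a semantic consensus; majority voting over exact
    strings is a simplification.)"""
    return Counter(hypotheses).most_common(1)[0][0]

def verify(question: str, context: str, missing: str) -> bool:
    """Verify: re-examine the context to confirm the claimed gap is real.
    Returns True when the context is judged sufficient."""
    if missing.upper() == "NONE":
        return True
    prompt = (
        f"Context: {context}\nClaimed missing information: {missing}\n"
        "Re-read the context carefully. Is this information actually present? "
        "Answer PRESENT or ABSENT."
    )
    return query_llm(prompt).strip().upper() == "PRESENT"

def is_sufficient(question: str, context: str) -> bool:
    hypotheses = identify_missing_info(question, context)
    return verify(question, context, consensus(hypotheses))
```

The verification step is the part that distinguishes this from plain self-consistency prompting: the model must ground its sufficiency judgment by checking its own claim about the missing information against the source text, rather than answering "sufficient/insufficient" directly.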

Related benchmarks

Task                                        Dataset             Metric      Result   Rank
Question Answering Sufficiency Prediction   HotpotQA            Accuracy    85.24    3
Question Answering Sufficiency Prediction   2WikiMultihopQA     Accuracy    89.31    3
Question Answering Sufficiency Prediction   MuSiQue Ans         Accuracy    71.86    3
Question Answering Sufficiency Prediction   MuSiQue Full        Accuracy    72.13    3
Question Answering Sufficiency Prediction   CouldAsk Benchmark  BBC Score   0.7878   3
Question Answering Sufficiency Prediction   FaithEval           Accuracy    57.61    3
