
Double-Calibration: Towards Trustworthy LLMs via Calibrating Knowledge and Reasoning Confidence

About

Trustworthy reasoning in Large Language Models (LLMs) is challenged by their propensity for hallucination. While augmenting LLMs with Knowledge Graphs (KGs) improves factual accuracy, existing KG-augmented methods fail to quantify epistemic uncertainty in both the retrieved evidence and LLMs' reasoning. To bridge this gap, we introduce DoublyCal, a framework built on a novel double-calibration principle. DoublyCal employs a lightweight proxy model to first generate KG evidence alongside a calibrated evidence confidence. This calibrated supporting evidence then guides a black-box LLM, yielding final predictions that are not only more accurate but also well-calibrated, with confidence scores traceable to the uncertainty of the supporting evidence. Experiments on knowledge-intensive benchmarks show that DoublyCal significantly improves both the accuracy and confidence calibration of black-box LLMs with low token cost.
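The two-stage flow described above (a proxy model emits KG evidence with a calibrated confidence, which then conditions a black-box LLM) can be sketched as below. This is a minimal illustration of the pipeline shape only: the function names, prompt format, and the `min`-based confidence combination are hypothetical placeholders, not the authors' actual method or API.

```python
from dataclasses import dataclass

@dataclass
class ProxyOutput:
    """Output of the lightweight proxy model (hypothetical structure)."""
    evidence: str        # retrieved KG evidence, serialized as text
    confidence: float    # calibrated evidence confidence in [0, 1]

def doubly_cal(question, retrieve_evidence, llm_answer):
    """Sketch of the double-calibration loop.

    `retrieve_evidence` and `llm_answer` are caller-supplied callables
    standing in for the proxy model and the black-box LLM, respectively.
    """
    # Stage 1: proxy model retrieves evidence with a calibrated confidence.
    out = retrieve_evidence(question)
    # Stage 2: evidence and its confidence condition the black-box LLM prompt.
    prompt = (f"Evidence (confidence {out.confidence:.2f}): {out.evidence}\n"
              f"Question: {question}")
    answer, answer_conf = llm_answer(prompt)
    # Keep the final score traceable to the evidence uncertainty; taking the
    # minimum is an illustrative choice, not the paper's calibration rule.
    return answer, min(answer_conf, out.confidence)
```

In this sketch the final confidence can never exceed the evidence confidence, which makes the prediction's uncertainty traceable to the retrieved evidence, the property the abstract emphasizes.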

Yuyin Lu, Ziran Liang, Yanghui Rao, Wenqi Fan, Fu Lee Wang, Qing Li • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Knowledge Graph Question Answering | CWQ (test) | Hits@1 | 71.3 | 69 |
| Knowledge Graph Question Answering | WEBQSP (test) | Hit | 91.5 | 30 |
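Both benchmark metrics above are hit-rate metrics: the fraction of questions whose top-ranked prediction matches a gold answer (Hits@1 is the k=1 case). A minimal reference implementation, assuming ranked prediction lists and gold answer sets:

```python
def hits_at_k(ranked_answers, gold_answers, k=1):
    """Fraction of questions with a gold answer among the top-k predictions.

    ranked_answers: list of per-question prediction lists, best first.
    gold_answers:   list of per-question sets of acceptable gold answers.
    """
    hits = sum(
        1
        for preds, gold in zip(ranked_answers, gold_answers)
        if any(p in gold for p in preds[:k])
    )
    return hits / len(ranked_answers)
```

For example, a score of 71.3 on CWQ means that for 71.3% of test questions, the top-ranked answer was correct.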
