
GrACE: A Generative Approach to Better Confidence Elicitation and Efficient Test-Time Scaling in Large Language Models

About

Assessing the reliability of Large Language Models (LLMs) via confidence elicitation is a prominent approach to AI safety in high-stakes applications, such as healthcare and finance. Existing methods either require expensive computational overhead or suffer from poor calibration, making them impractical and unreliable for real-world deployment. In this work, we propose GrACE, a Generative Approach to Confidence Elicitation that enables scalable and reliable confidence elicitation for LLMs. GrACE adopts a novel mechanism in which the model expresses confidence, in real time, as the similarity between its last hidden state and the embedding of a special token appended to the vocabulary. The model is fine-tuned to calibrate this confidence against accuracy-derived targets. Extensive experiments show that the confidence produced by GrACE achieves the best discriminative capacity and calibration on open-ended generation tasks without resorting to additional sampling or an auxiliary model. Moreover, we propose two confidence-based strategies for test-time scaling with GrACE, which not only improve the accuracy of the final decision but also significantly reduce the number of required samples, highlighting its potential as a practical solution for deploying LLMs with reliable, on-the-fly confidence estimation.
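The confidence readout and confidence-based test-time scaling described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the sigmoid squashing of the similarity score, and the early-stopping threshold are all assumptions.

```python
import numpy as np

def grace_confidence(last_hidden, conf_embedding):
    # GrACE-style readout: similarity (here, a dot product) between the
    # final hidden state and the special confidence-token embedding,
    # squashed to (0, 1). The sigmoid squashing is an assumption.
    logit = float(np.dot(last_hidden, conf_embedding))
    return 1.0 / (1.0 + np.exp(-logit))

def sample_until_confident(generate, threshold=0.8, max_samples=8):
    # Hypothetical confidence-based test-time scaling: draw samples and
    # stop as soon as one clears the confidence threshold, instead of
    # always generating a fixed-size pool. Returns the best (answer,
    # confidence) pair seen so far.
    best = None
    for _ in range(max_samples):
        answer, conf = generate()
        if best is None or conf > best[1]:
            best = (answer, conf)
        if conf >= threshold:
            break
    return best

# Toy example: a 4-dim hidden state and a confidence-token embedding.
h = np.array([0.5, -0.2, 0.1, 0.8])
e_conf = np.array([0.9, 0.0, 0.3, 0.4])
conf = grace_confidence(h, e_conf)

# Stub generator standing in for sampling from an LLM.
draws = iter([("a", 0.3), ("b", 0.9), ("c", 0.95)])
answer, best_conf = sample_until_confident(lambda: next(draws), threshold=0.8)
```

Because the readout is a single dot product against one extra embedding row, the confidence comes essentially for free with each generation, which is what makes the early-stopping strategy cheaper than fixed-budget sampling.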

Zhaohan Zhang, Ziquan Liu, Ioannis Patras• 2025

Related benchmarks

Task                    Dataset        Result          Rank
Mathematical Reasoning  MathQA         Accuracy 88.3   305
Open-ended generation   SciQ           ECE 5.21        21
Open-ended generation   TriviaQA       ECE 5.94        21
Question Answering      ARC Challenge  Accuracy 90.3   10
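The ECE (Expected Calibration Error) figures in the table follow the standard binned definition: predictions are grouped into confidence bins, and the gaps between each bin's mean confidence and its accuracy are averaged, weighted by bin size. A minimal sketch, assuming the conventional equal-width binning (the bin count is a common default, not taken from this page):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    # Bin predictions by confidence, then sum |mean confidence - accuracy|
    # per bin, weighted by the fraction of samples in that bin.
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        if i == 0:
            mask = (confidences >= lo) & (confidences <= hi)
        else:
            mask = (confidences > lo) & (confidences <= hi)
        if mask.sum() == 0:
            continue
        gap = abs(confidences[mask].mean() - correct[mask].mean())
        ece += (mask.sum() / len(confidences)) * gap
    return ece
```

Lower is better; the table's values appear to be reported in percent (ECE × 100).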
