
Are Generative Models Underconfident? Better Quality Estimation with Boosted Model Probability

About

Quality Estimation (QE) is the task of estimating the quality of a model's output at inference time, when no ground truth is available. Deriving output quality from the model's output probability is the simplest, lowest-effort approach. However, we show that the output probability of text-generation models can appear underconfident: at each output step there may be multiple correct options, which spreads the probability distribution out, so a lower probability does not necessarily mean lower output quality. Based on this observation, we propose a QE approach called BoostedProb, which boosts the model's confidence in cases where there are multiple viable output options. With no increase in complexity, BoostedProb is notably better than raw model probability in different settings, achieving on average a +0.194 improvement in Pearson correlation with ground-truth quality. It also comes close to or outperforms more costly approaches such as supervised or ensemble-based QE in certain settings.
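The abstract does not spell out the exact boosting rule, but the idea can be illustrated with a minimal sketch: at each step, instead of scoring only the probability of the chosen token, also credit the probability mass of other tokens that are themselves plausible alternatives. The function name, the threshold criterion for "viable option," and the toy distributions below are all assumptions for illustration, not the paper's actual algorithm.

```python
import math

def boosted_seq_logprob(step_distributions, chosen_ids, boost_threshold=0.2):
    """Hypothetical BoostedProb-style score (illustrative sketch only).

    For each generation step, add to the chosen token's probability the
    mass of other tokens whose probability exceeds `boost_threshold`,
    treating them as viable alternatives. Returns the average per-step
    log-probability of the (boosted) chosen tokens.
    """
    total_logprob = 0.0
    for dist, chosen in zip(step_distributions, chosen_ids):
        p = dist[chosen]
        # Boost: credit the probability mass of other plausible options.
        boosted = p + sum(q for i, q in enumerate(dist)
                          if i != chosen and q >= boost_threshold)
        total_logprob += math.log(min(boosted, 1.0))
    return total_logprob / len(step_distributions)

# Toy example: step 1 has two viable options (0.45 vs. 0.40), so the raw
# probability looks underconfident even though the output may be correct.
dists = [[0.45, 0.40, 0.15], [0.90, 0.05, 0.05]]
chosen = [0, 0]
raw = sum(math.log(d[c]) for d, c in zip(dists, chosen)) / 2
boosted = boosted_seq_logprob(dists, chosen)
# boosted > raw: the spread-out first step no longer penalizes the score.
```

In this sketch, only step 1 is boosted (0.45 + 0.40 = 0.85), while step 2 is left at 0.90, so the score rises exactly where the distribution was spread across viable options.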

Tu Anh Dinh, Jan Niehues • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Quality Estimation | PAWS-X (test) | BCE | 1.256 | 36 |
| Quality Estimation | WMT EN-DE 22 | Pearson R | 0.367 | 15 |
| Quality Estimation | WMT en-es 24 | Pearson Correlation | 0.508 | 8 |
| Quality Estimation | ParaCrawl | Pearson Correlation | 0.235 | 8 |
| Quality Estimation | WMT en-de 24 | Pearson Correlation | 0.414 | 8 |
| Word-level Quality Estimation | HJQE WMT20 QE Shared Task (test) | BCE Loss | 0.829 | 8 |
| Quality Estimation | WMT en-ko 24 | Pearson Correlation | 0.595 | 4 |
| Quality Estimation | WMT en-fr 24 | Pearson Correlation | 0.37 | 4 |
| Quality Estimation | WMT en-pt 24 | Pearson Correlation | 0.458 | 4 |
| Quality Estimation | WMT en-nl 24 | Pearson Correlation | 0.419 | 4 |

Showing 10 of 15 rows.
