
Non-Parametric Calibration for Classification

About

Many applications of classification methods not only require high accuracy but also reliable estimation of predictive uncertainty. However, while many current classification frameworks, in particular deep neural networks, achieve high accuracy, they tend to incorrectly estimate uncertainty. In this paper, we propose a method that adjusts the confidence estimates of a general classifier such that they approach the probability of classifying correctly. In contrast to existing approaches, our calibration method employs a non-parametric representation using a latent Gaussian process and is specifically designed for multi-class classification. It can be applied to any classifier that outputs confidence estimates and is not limited to neural networks. We also provide a theoretical analysis regarding the over- and underconfidence of a classifier and its relationship to calibration, as well as an empirical outlook for calibrated active learning. In experiments we show the universally strong performance of our method across different classifiers and benchmark data sets, in particular for state-of-the-art neural network architectures.
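The abstract describes post-hoc calibration: a map, fit on held-out data, that rescales a classifier's confidence scores toward the empirical probability of being correct. As a minimal sketch of that setting, the snippet below implements histogram binning, a classic non-parametric calibrator (deliberately simpler than the paper's latent Gaussian process method; function names and bin count are illustrative choices, not from the paper):

```python
import numpy as np

def fit_histogram_binning(confidences, correct, n_bins=10):
    """Fit a histogram-binning calibration map on held-out data.

    Within each confidence bin, the calibrated confidence is the
    empirical accuracy observed in that bin, so an overconfident
    classifier gets pulled down toward its true accuracy.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # Bin index of each held-out prediction (0 .. n_bins - 1).
    idx = np.clip(np.digitize(confidences, edges[1:-1]), 0, n_bins - 1)
    # Per-bin empirical accuracy; empty bins fall back to the bin midpoint.
    acc = np.array([
        correct[idx == b].mean() if (idx == b).any()
        else (edges[b] + edges[b + 1]) / 2.0
        for b in range(n_bins)
    ])

    def calibrate(new_conf):
        new_conf = np.asarray(new_conf, dtype=float)
        j = np.clip(np.digitize(new_conf, edges[1:-1]), 0, n_bins - 1)
        return acc[j]

    return calibrate
```

For example, a classifier reporting 0.95 confidence while being right only half the time would have those scores mapped down to roughly 0.5 on new inputs falling in the same bin.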

Jonathan Wenger, Hedvig Kjellström, Rudolph Triebel · 2019

Related benchmarks

Task                              Dataset               Result        Rank
Long-Tailed Image Classification  ImageNet-LT (test)    --            220
Confidence calibration            CIFAR-100-LT (test)   ECE 0.0273    53
Model Calibration                 CIFAR-10              ECE 2.01      40
Model Calibration                 SVHN                  ECE 2.32      40
Calibration                       MNIST                 ECE 0.41      33
Uncertainty Calibration           CIFAR-10.1 C          ECE 31.37     27
Uncertainty Calibration           CIFAR-10.1            ECE 0.0466    27
Uncertainty Calibration           CIFAR-F               ECE 7.1       27
Calibration                       Digital-S             ECE 8.24      27
Calibration                       USPS                  ECE 5.88      27
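The results in the benchmark table are reported as ECE, the expected calibration error: predictions are grouped into confidence bins, and the gaps between each bin's average confidence and its empirical accuracy are averaged, weighted by bin size. A minimal sketch of that computation (the 15-bin default here is an illustrative choice, not taken from the leaderboard):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Expected calibration error (ECE).

    Bins predictions by confidence, then averages the per-bin gap
    |accuracy - confidence| weighted by the fraction of samples in
    each bin. 0 means perfectly calibrated.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()    # empirical accuracy in bin
            conf = confidences[in_bin].mean()  # mean confidence in bin
            ece += in_bin.mean() * abs(acc - conf)
    return ece
```

A perfectly calibrated classifier (confidence 0.8, correct 80% of the time) scores 0, while a classifier that is always wrong at confidence 0.99 scores 0.99.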
