
Uncertainty Sets for Image Classifiers using Conformal Prediction

About

Convolutional image classifiers can achieve high predictive accuracy, but quantifying their uncertainty remains an unresolved challenge, hindering their deployment in consequential settings. Existing uncertainty quantification techniques, such as Platt scaling, attempt to calibrate the network's probability estimates, but they do not have formal guarantees. We present an algorithm that modifies any classifier to output a predictive set containing the true label with a user-specified probability, such as 90%. The algorithm is simple and fast like Platt scaling, but provides a formal finite-sample coverage guarantee for every model and dataset. Our method modifies an existing conformal prediction algorithm to give more stable predictive sets by regularizing the small scores of unlikely classes after Platt scaling. In experiments on both Imagenet and Imagenet-V2 with ResNet-152 and other classifiers, our scheme outperforms existing approaches, achieving coverage with sets that are often factors of 5 to 10 smaller than a stand-alone Platt scaling baseline.
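To make the set-valued prediction idea concrete, here is a minimal sketch of split-conformal prediction sets using the adaptive (cumulative softmax mass) score that this paper builds on, without the paper's regularization term. It assumes you already have softmax probabilities from any classifier; all function names and the synthetic data are illustrative, not the authors' code.

```python
import numpy as np

def calibrate_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split-conformal calibration on held-out data.

    Score = total softmax mass of classes at least as probable as the
    true label. Returns a threshold tau such that prediction sets built
    with tau contain the true label with probability >= 1 - alpha.
    """
    n = len(cal_labels)
    order = np.argsort(-cal_probs, axis=1)              # classes sorted by prob
    ranks = np.empty_like(order)
    np.put_along_axis(ranks, order,
                      np.broadcast_to(np.arange(cal_probs.shape[1]), order.shape),
                      axis=1)
    sorted_probs = np.take_along_axis(cal_probs, order, axis=1)
    cum = np.cumsum(sorted_probs, axis=1)
    # cumulative mass up to and including the true label
    scores = cum[np.arange(n), ranks[np.arange(n), cal_labels]]
    # finite-sample-corrected quantile level
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q, 1.0), method="higher")

def predict_sets(probs, tau):
    """Include the most probable classes until cumulative mass reaches tau."""
    order = np.argsort(-probs, axis=1)
    sorted_probs = np.take_along_axis(probs, order, axis=1)
    cum = np.cumsum(sorted_probs, axis=1)
    k = (cum < tau).sum(axis=1) + 1                     # set size, at least 1
    return [set(order[i, :k[i]]) for i in range(len(probs))]
```

The paper's modification adds a penalty to the scores of low-ranked classes, which shrinks and stabilizes the sets while keeping the same coverage guarantee; the calibration step above is otherwise unchanged.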

Anastasios Angelopoulos, Stephen Bates, Jitendra Malik, Michael I. Jordan • 2020

Related benchmarks

Task                  Dataset                             Result                          Rank
Conformal Inference   Average across 15 datasets (test)   Top-1 Accuracy: 79.4            60
Conformal Prediction  ImageNet                            --                              54
Node Classification   CoraFull (test)                     --                              33
Conformal Prediction  ImageNet ILSVRC2012 (test)          Avg Prediction Set Size: 2.548  18
Conformal Prediction  iNaturalist (test)                  Avg Prediction Set Size: 2.914  18
Conformal Prediction  CUB-Birds (test)                    Avg Prediction Set Size: 2.038  18
Conformal Prediction  CIFAR-100                           Avg Prediction Set Size: 8.6    17
Conformal Prediction  15 datasets (average)               Top-1 Accuracy: 63.8            15
Conformal Prediction  Average across 15 datasets (test)   Top-1 Accuracy: 63.8            12
Conformal Prediction  Places365, alpha=0.05 (test)        Set Size: 21.46                 12

Showing 10 of 34 rows.
