
To Trust Or Not To Trust A Classifier

About

Knowing when a classifier's prediction can be trusted is useful in many applications and critical for safely using AI. While the bulk of the effort in machine learning research has been towards improving classifier performance, understanding when a classifier's predictions should and should not be trusted has received far less attention. The standard approach is to use the classifier's discriminant or confidence score; however, we show there exists an alternative that is more effective in many situations. We propose a new score, called the trust score, which measures the agreement between the classifier and a modified nearest-neighbor classifier on the testing example. We show empirically that high (low) trust scores produce surprisingly high precision at identifying correctly (incorrectly) classified examples, consistently outperforming the classifier's confidence score as well as many other baselines. Further, under some mild distributional assumptions, we show that if the trust score for an example is high (low), the classifier will likely agree (disagree) with the Bayes-optimal classifier. Our guarantees consist of non-asymptotic rates of statistical consistency under various nonparametric settings and build on recent developments in topological data analysis.
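The trust score described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's full method: it omits the density-based filtering of the training set and simply takes the ratio of the distance to the nearest training point of any class other than the predicted class over the distance to the nearest training point of the predicted class, so scores above 1 indicate the test point lies closer to its predicted class than to any other.

```python
import numpy as np

def trust_score(X_train, y_train, x_test, predicted_label):
    """Simplified trust-score sketch (density filtering omitted).

    Returns d(nearest point of any OTHER class) / d(nearest point of
    the PREDICTED class). Higher values suggest the prediction is more
    trustworthy.
    """
    eps = 1e-12  # guard against division by zero
    same = X_train[y_train == predicted_label]
    other = X_train[y_train != predicted_label]
    d_same = np.min(np.linalg.norm(same - x_test, axis=1))
    d_other = np.min(np.linalg.norm(other - x_test, axis=1))
    return d_other / (d_same + eps)

# Tiny illustration: two well-separated clusters.
X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
y = np.array([0, 0, 1, 1])
x = np.array([0.1, 0.5])
print(trust_score(X, y, x, predicted_label=0))  # well above 1
print(trust_score(X, y, x, predicted_label=1))  # well below 1
```

In the paper's full formulation, low-density training points are first removed per class before computing these distances, which makes the score robust to outliers.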

Heinrich Jiang, Been Kim, Melody Y. Guan, Maya Gupta • 2018

Related benchmarks

Task                        Dataset          Metric     Result   Rank
Error detection             CIFAR10-C        F1 Score   0.568    10
Error detection             Digits           F1 Score   49.6     10
Error detection             Amazon Review    F1 Score   0.414    10
Error detection             Office-31        F1 Score   55.9     10
Error detection             iWILDCam         F1 Score   73.7     10
Trustworthiness Prediction  MNIST (val)      Accuracy   99.1     6
Trustworthiness Prediction  CIFAR-10 (val)   Accuracy   92.19    6
