
In Defense of Softmax Parametrization for Calibrated and Consistent Learning to Defer

About

Enabling machine learning classifiers to defer their decision to a downstream expert when the expert is more accurate can improve both safety and performance. This objective can be achieved with the learning-to-defer framework, which aims to jointly learn how to classify and how to defer to the expert. Recent studies have shown theoretically that popular estimators for learning to defer parameterized with softmax provide unbounded estimates of the likelihood of deferring, which makes them uncalibrated. However, it remains unknown whether this is due to the widely used softmax parameterization itself, and whether a softmax-based estimator exists that is both statistically consistent and yields valid probability estimates. In this work, we first show that the miscalibrated and unbounded estimators in prior literature are caused by the symmetric nature of the surrogate losses used, not by softmax. We then propose a novel, statistically consistent, asymmetric softmax-based surrogate loss that produces valid estimates without the issue of unboundedness. We further analyze the non-asymptotic properties of our method and empirically validate its performance and calibration on benchmark datasets.
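To make the unboundedness issue concrete, the sketch below contrasts a symmetric-softmax-style deferral estimator of the form exp(g_defer) / max_y exp(g_y), which can exceed 1, with an asymmetric normalization that keeps the estimate in [0, 1]. Both formulas here are illustrative simplifications chosen for exposition, not the paper's exact surrogate losses, and the logit values are hypothetical.

```python
import numpy as np

def symmetric_defer_estimate(class_logits, defer_logit):
    # Deferral estimate in the style of symmetric softmax surrogates:
    # exp(g_defer) / max_y exp(g_y). Nothing constrains this ratio to
    # lie in [0, 1], so it is not a valid probability estimate.
    return np.exp(defer_logit) / np.max(np.exp(class_logits))

def asymmetric_defer_estimate(class_logits, defer_logit):
    # An asymmetric normalization (illustrative, not necessarily the
    # paper's exact parameterization): the defer logit is normalized
    # against itself plus the class logits, so the result is always
    # a valid probability in [0, 1].
    e_defer = np.exp(defer_logit)
    return e_defer / (e_defer + np.exp(class_logits).sum())

# Hypothetical logits for a 3-class problem with a deferral option.
g = np.array([0.1, -0.3, 0.2])
g_defer = 2.0

print(symmetric_defer_estimate(g, g_defer))   # greater than 1: unbounded
print(asymmetric_defer_estimate(g, g_defer))  # strictly inside (0, 1)
```

The qualitative point survives the simplification: symmetry across the class and deferral outputs is what lets the estimate escape [0, 1], while an asymmetric treatment of the deferral output bounds it by construction.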

Yuzhou Cao, Hussein Mozannar, Lei Feng, Hongxin Wei, Bo An • 2023

Related benchmarks

| Task              | Dataset              | Result           | Rank |
|-------------------|----------------------|------------------|------|
| Learning to Defer | CIFAR100 (test)      | Error Rate 19.3  | 24   |
| Airspace Opacity  | NIH Chest X-ray (test) | Error Rate 10.06 | 4  |
| Learning to Defer | HateSpeech           | Error 8.06       | 4    |
| Learning to Defer | ImageNet-16H         | Error 12.59      | 4    |
| Learning to Defer | CIFAR-10H (test)     | Error Rate 3.65  | 4    |
