
Decoupling Feature Extraction and Classification Layers for Calibrated Neural Networks

About

Deep neural networks (DNNs) have shown great promise in many classification applications, yet they are widely known to produce poorly calibrated predictions when over-parametrized. Improving DNN calibration without compromising model accuracy is of great importance in safety-critical applications such as healthcare. In this work, we show that decoupling the training of the feature extraction layers and the classification layers in over-parametrized DNN architectures such as Wide Residual Networks (WRN) and Vision Transformers (ViT) significantly improves model calibration whilst retaining accuracy, and at a low training cost. In addition, we show that placing a Gaussian prior on the last hidden layer outputs of a DNN, and training the model variationally in the classification training stage, improves calibration even further. We illustrate that these methods improve calibration across ViT and WRN architectures for several image classification benchmark datasets.
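The two-stage idea can be sketched in a few lines of PyTorch: train the full network end to end, then freeze the feature extraction layers and retrain only the classification head. The tiny MLP backbone, layer sizes, epoch counts, and optimiser settings below are placeholders for illustration, not the paper's actual WRN/ViT training setup, and the head re-initialisation is one possible variant of the second stage.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder backbone + linear head (stand-ins for a WRN/ViT backbone).
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(32, 16), nn.ReLU())
classifier = nn.Linear(16, 10)
model = nn.Sequential(feature_extractor, classifier)

x = torch.randn(8, 32)          # dummy batch
y = torch.randint(0, 10, (8,))  # dummy labels
loss_fn = nn.CrossEntropyLoss()

# Stage 1: train the whole network end to end as usual.
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(5):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

# Stage 2: freeze the feature extraction layers and retrain only the
# classification head (here re-initialised before the second stage).
for p in feature_extractor.parameters():
    p.requires_grad_(False)
classifier.reset_parameters()
opt = torch.optim.SGD(classifier.parameters(), lr=0.1)
for _ in range(5):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
```

The paper's variational variant additionally places a Gaussian prior on the last hidden layer outputs during the second stage; that extension (sampling via reparameterisation and adding a KL term to the loss) is omitted here for brevity.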

Mikkel Jordahn, Pablo M. Olmos • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
OOD Detection | CIFAR-100 standard (test) | AUROC (%) | 73.42 | 94
Image Classification Calibration | CIFAR-100 | Classwise ECE | 0.0109 | 90
OOD Detection | SVHN (test) | AUROC | 0.9373 | 84
Model Calibration | CIFAR-100 | ECE | 7.07 | 81
Model Calibration | CIFAR-10 | ECE | 1.52 | 68
Model Calibration | SVHN | ECE | 0.43 | 40
OOD Detection | CIFAR-10 (test) | AUROC | 88.47 | 40
Model Calibration | Tiny-ImageNet | ECE | 2.91 | 32
Model Calibration | CIFAR-10, CIFAR-100, and SVHN | Average ECE | 3.11 | 13
Image Classification Calibration | ImageNet | Accuracy | 81.04 | 6
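The ECE figures reported above follow the standard recipe: bin predictions by confidence and average the absolute gap between per-bin accuracy and per-bin mean confidence, weighted by bin size. A minimal NumPy sketch, assuming max-softmax confidences and binary correctness indicators; the bin count and the reporting scale (fraction vs. percent) vary across benchmarks.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Equal-width-bin ECE: bin predictions by confidence and take the
    bin-size-weighted average of |accuracy - mean confidence| per bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()    # empirical accuracy in the bin
            conf = confidences[in_bin].mean()  # mean confidence in the bin
            ece += in_bin.mean() * abs(acc - conf)
    return ece
```

For example, ten predictions all made with confidence 0.9 but only five of them correct give an ECE of 0.4, while confidence 0.8 with eight of ten correct is perfectly calibrated and gives an ECE of (numerically) zero.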
