VOLTA: The Surprising Ineffectiveness of Auxiliary Losses for Calibrated Deep Learning

About

Uncertainty quantification (UQ) is essential for deploying deep learning models in safety-critical applications, yet no consensus exists on which UQ method performs best across data modalities and distribution shifts. This paper presents a comprehensive benchmark of ten widely used UQ baselines, including MC Dropout, SWAG, deep ensembles, temperature scaling, energy-based OOD detection, Mahalanobis distance, hyperbolic classifiers, ENN, Taylor Sensus, and split conformal prediction, against a simplified yet highly effective variant of VOLTA that retains only a deep encoder, learnable prototypes, cross-entropy loss, and post-hoc temperature scaling. We evaluate all methods on CIFAR-10 (in-distribution), CIFAR-100, SVHN, and uniform noise (out-of-distribution), CIFAR-10-C (corruptions), and Tiny ImageNet features (tabular). VOLTA achieves competitive or superior accuracy (up to 0.864 on CIFAR-10), significantly lower expected calibration error (0.010 vs. 0.044–0.102 for baselines), and strong OOD detection (AUROC 0.802). Statistical testing over three random seeds shows that VOLTA matches or outperforms most baselines, with ablation studies confirming the importance of adaptive temperature and deep encoders. Our results establish VOLTA as a lightweight, deterministic, and well-calibrated alternative to more complex UQ approaches.
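The simplified VOLTA variant described in the abstract can be sketched as a prototype-based classifier with post-hoc temperature scaling. The code below is an illustrative NumPy sketch, not the authors' implementation: the distance-based logit rule, the NLL-minimizing grid search for the temperature, and all names and shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, embed_dim = 10, 64
# Learnable class prototypes (trained jointly with the encoder in practice).
prototypes = rng.normal(size=(num_classes, embed_dim))

def logits_from_embeddings(z, prototypes):
    """Use negative squared distance to each prototype as the class logit
    (one plausible prototype-classifier rule; an assumption here)."""
    d2 = ((z[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return -d2

def softmax(x, T=1.0):
    """Temperature-scaled softmax."""
    x = x / T
    x = x - x.max(axis=1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(val_logits, val_labels, grid=np.linspace(0.5, 5.0, 46)):
    """Post-hoc temperature scaling: choose T minimizing the negative
    log-likelihood on a held-out validation set."""
    best_T, best_nll = 1.0, np.inf
    for T in grid:
        p = softmax(val_logits, T)
        nll = -np.log(p[np.arange(len(val_labels)), val_labels] + 1e-12).mean()
        if nll < best_nll:
            best_T, best_nll = T, nll
    return best_T

# Toy held-out embeddings and labels standing in for encoder outputs.
z_val = rng.normal(size=(100, embed_dim))
y_val = rng.integers(0, num_classes, size=100)
val_logits = logits_from_embeddings(z_val, prototypes)
T = fit_temperature(val_logits, y_val)
probs = softmax(val_logits, T)
```

The grid search is a simple stand-in for the usual gradient-based fit of a single scalar temperature; either way, scaling only affects confidence, not the argmax prediction, which is why it improves calibration without changing accuracy.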

Rahul D. Ray, Utkarsh Srivastava · 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Out-of-Distribution Detection | CIFAR-10 vs CIFAR-100 | AUROC | 80.23 | 70 |
| Image Classification | CIFAR-10 | Accuracy | 87.7 | 10 |
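The OOD-detection AUROC reported above can be computed from any per-sample confidence score by ranking in-distribution against OOD samples. A minimal sketch follows; the choice of score (here, maximum softmax probability on toy data) and all variable names are assumptions, since the paper's exact scoring rule is not given in this excerpt.

```python
import numpy as np

def auroc(scores_id, scores_ood):
    """AUROC for separating in-distribution samples (expected to have
    higher scores) from OOD samples, via the Mann-Whitney U statistic.
    Assumes continuous scores (ties are not averaged)."""
    scores = np.concatenate([scores_id, scores_ood])
    labels = np.concatenate([np.ones(len(scores_id)), np.zeros(len(scores_ood))])
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = len(scores_id), len(scores_ood)
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

rng = np.random.default_rng(0)
# Toy stand-ins for max-softmax confidences on ID and OOD inputs.
id_conf = rng.uniform(0.7, 1.0, size=1000)
ood_conf = rng.uniform(0.3, 0.9, size=1000)
score = auroc(id_conf, ood_conf)
```

An AUROC of 0.5 means the score cannot distinguish the two sets; 1.0 means every ID sample outranks every OOD sample.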
