
CLEAR: Calibrated Learning for Epistemic and Aleatoric Risk

About

Accurate uncertainty quantification is critical for reliable predictive modeling. Existing methods typically address either aleatoric uncertainty due to measurement noise or epistemic uncertainty resulting from limited data, but not both in a balanced manner. We propose CLEAR, a calibration method with two distinct parameters, $\gamma_1$ and $\gamma_2$, to combine the two uncertainty components and improve the conditional coverage of predictive intervals for regression tasks. CLEAR is compatible with any pair of aleatoric and epistemic estimators; we show how it can be used with (i) quantile regression for aleatoric uncertainty and (ii) ensembles drawn from the Predictability-Computability-Stability (PCS) framework for epistemic uncertainty. Across 17 diverse real-world datasets, CLEAR achieves an average improvement of 28.3% and 17.5% in the interval width compared to the two individually calibrated baselines while maintaining nominal coverage. Similar improvements are observed when applying CLEAR to Deep Ensembles (epistemic) and Simultaneous Quantile Regression (aleatoric). The benefits are especially evident in scenarios dominated by high aleatoric or epistemic uncertainty. Project page: https://unco3892.github.io/clear/
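The abstract does not give the exact combination rule, but the core idea can be sketched as follows: form a predictive interval whose half-width mixes the two uncertainty components via the scaling parameters $\gamma_1$ and $\gamma_2$, then tune those parameters on a held-out calibration set so the intervals reach nominal coverage at minimal width. The grid-search calibration below is an illustrative assumption, not the paper's actual algorithm, and all function names are hypothetical.

```python
import numpy as np

def clear_interval(median, aleatoric, epistemic, gamma1, gamma2):
    """Hypothetical interval: half-width = gamma1 * aleatoric + gamma2 * epistemic.

    `median` is a point prediction; `aleatoric` and `epistemic` are
    per-point uncertainty estimates from any pair of estimators.
    """
    half = gamma1 * aleatoric + gamma2 * epistemic
    return median - half, median + half

def calibrate(median, aleatoric, epistemic, y, alpha=0.1, grid=None):
    """Toy calibration: grid-search (gamma1, gamma2) on held-out data,
    keeping the narrowest intervals that achieve (1 - alpha) coverage."""
    if grid is None:
        grid = np.linspace(0.1, 5.0, 50)
    best = None  # (mean width, gamma1, gamma2)
    for g1 in grid:
        for g2 in grid:
            lo, hi = clear_interval(median, aleatoric, epistemic, g1, g2)
            coverage = np.mean((y >= lo) & (y <= hi))
            width = np.mean(hi - lo)
            if coverage >= 1 - alpha and (best is None or width < best[0]):
                best = (width, g1, g2)
    return best
```

Using two parameters instead of one lets the calibration up-weight whichever uncertainty source dominates a given dataset, which is consistent with the abstract's note that gains are largest in regimes of high aleatoric or high epistemic uncertainty.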

Ilia Azizi, Juraj Bodik, Jakob Heiss, Bin Yu • 2025

Related benchmarks

| Task       | Dataset        | Result     | Rank |
|------------|----------------|------------|------|
| Regression | CA Housing     | --         | 45   |
| Regression | Airfoil        | NCIW 0.173 | 22   |
| Regression | Kin8nm         | NCIW 0.194 | 22   |
| Regression | Parkinsons     | NCIW 0.25  | 22   |
| Regression | allstate       | NCIW 0.333 | 22   |
| Regression | superconductor | NCIW 20.9  | 22   |
| Regression | elevator       | NCIW 0.117 | 19   |
| Regression | Insurance      | NCIW 0.345 | 17   |
| Regression | ailerons       | NCIW 0.205 | 17   |
| Regression | qsar           | NCIW 0.41  | 17   |

Showing 10 of 99 rows.
