
ValUES: A Framework for Systematic Validation of Uncertainty Estimation in Semantic Segmentation

About

Uncertainty estimation is an essential and heavily studied component for the reliable application of semantic segmentation methods. While numerous studies claim methodological advances on the one hand and successful applications on the other, the field is currently hampered by a gap between theory and practice, leaving fundamental questions unanswered: Can data-related and model-related uncertainty really be separated in practice? Which components of an uncertainty method are essential for real-world performance? Which uncertainty method works well for which application? In this work, we link this research gap to a lack of systematic and comprehensive evaluation of uncertainty methods. Specifically, we identify three key pitfalls in the current literature and present an evaluation framework that bridges the research gap by providing 1) a controlled environment for studying data ambiguities as well as distribution shifts, 2) systematic ablations of relevant method components, and 3) test-beds for the five predominant uncertainty applications: OoD-detection, active learning, failure detection, calibration, and ambiguity modeling. Empirical results on simulated as well as real-world data demonstrate how the proposed framework is able to answer the predominant questions in the field, revealing for instance that 1) separation of uncertainty types works on simulated data but does not necessarily translate to real-world data, 2) aggregation of scores is a crucial but currently neglected component of uncertainty methods, and 3) while ensembles perform most robustly across the different downstream tasks and settings, test-time augmentation often constitutes a lightweight alternative. Code is available at: https://github.com/IML-DKFZ/values
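One of the findings above, that test-time augmentation (TTA) can be a lightweight alternative to ensembles, can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: the function names, the predictive-entropy score, and the mean-over-pixels aggregation are illustrative choices (the abstract itself notes that the aggregation step is crucial and often neglected).

```python
import numpy as np

def tta_uncertainty(predict, image, augmentations, inverses):
    """Estimate segmentation uncertainty via test-time augmentation.

    `predict` maps an image (H, W, C) to per-pixel class probabilities
    (H, W, K). Each augmentation is paired with its inverse so that the
    prediction can be mapped back onto the original pixel grid.
    """
    probs = []
    for aug, inv in zip(augmentations, inverses):
        p = predict(aug(image))   # predict on the augmented view
        probs.append(inv(p))      # undo the augmentation on the output
    mean_p = np.mean(probs, axis=0)  # average softmax outputs over views

    # Per-pixel predictive entropy as the uncertainty map
    entropy = -np.sum(mean_p * np.log(mean_p + 1e-12), axis=-1)

    # Aggregation: collapse the pixel map into one image-level score.
    # The mean is the simplest choice; the paper's point is that this
    # step materially affects downstream-task performance.
    return entropy, entropy.mean()
```

Compared with a deep ensemble, only the augmentations vary here, so a single trained model suffices, which is what makes TTA lightweight.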

Kim-Celine Kahl, Carsten T. Lüth, Maximilian Zenk, Klaus Maier-Hein, Paul F. Jaeger • 2024

Related benchmarks

Task                             Dataset    Result        Rank
Failure Detection                ARC BC     E-AURC 14     48
Failure Detection                ARC-Nuc    E-AURC 0.05   48
Failure Detection                CAR-CS     E-AURC 11     48
Failure Detection                LIDC-Mal   E-AURC 0.06   48
Failure Detection                LIDC-Tex   E-AURC 18     48
Out-of-Distribution Detection    ARC BC     AUROC 85      48
Out-of-Distribution Detection    ARC-Nuc    AUROC 88      48
Out-of-Distribution Detection    LIDC-Mal   AUROC 96      48
Out-of-Distribution Detection    LIDC-Tex   AUROC 89      48
Failure Detection                WORM-Nem   E-AURC 11     48
Showing 10 of 43 rows
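The failure-detection rows above report E-AURC, the excess area under the risk-coverage curve: the AURC of the method's confidence ranking minus the AURC of an oracle that ranks samples perfectly by their true risk. A minimal sketch of this metric, assuming per-sample `confidence` and `risk` arrays (the function names are illustrative, not from the paper's codebase):

```python
import numpy as np

def aurc(confidence, risk):
    """Area under the risk-coverage curve.

    Samples are sorted by decreasing confidence; at coverage k/n the
    selective risk is the mean risk of the k most confident samples.
    """
    order = np.argsort(-confidence)
    sorted_risk = risk[order]
    cum_mean = np.cumsum(sorted_risk) / np.arange(1, len(risk) + 1)
    return cum_mean.mean()

def e_aurc(confidence, risk):
    """Excess AURC: AURC minus the AURC of a perfect (oracle) ranking."""
    oracle = aurc(-risk, risk)  # oracle: lowest confidence on riskiest samples
    return aurc(confidence, risk) - oracle
```

A confidence ranking that perfectly separates failures from successes yields an E-AURC of 0; any ranking error makes it positive, so lower is better in the table above.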
