
Generalized Regularized Evidential Deep Learning Models: Theory and Comprehensive Evaluation

About

Evidential deep learning (EDL) models, based on Subjective Logic, introduce a principled and computationally efficient way to make deterministic neural networks uncertainty-aware. The resulting evidential models can quantify fine-grained uncertainty using learned evidence. However, the Subjective Logic framework constrains evidence to be non-negative, requiring specific activation functions whose geometric properties can induce activation-dependent learning-freeze behavior: a regime where gradients become extremely small for samples mapped into low-evidence regions. We theoretically characterize this behavior and analyze how different evidential activations influence learning dynamics. Building on this analysis, we design a general family of activation functions and corresponding evidential regularizers that provide an alternative pathway for consistent evidence updates across activation regimes. Extensive experiments on four benchmark classification problems (MNIST, CIFAR-10, CIFAR-100, and Tiny-ImageNet), two few-shot classification problems, and a blind face restoration problem empirically validate the developed theory and demonstrate the effectiveness of the proposed generalized regularized evidential models.
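To make the learning-freeze mechanism concrete, the sketch below shows a standard EDL classification head in NumPy: raw logits are mapped to non-negative evidence by an activation (ReLU, exp, or softplus), then converted to Dirichlet parameters and a Subjective-Logic vacuity (uncertainty) score. This is a minimal, hypothetical illustration of the common EDL setup the paper builds on, not the paper's generalized activations or regularizers; all function names here are our own. Note how ReLU yields zero evidence (and hence a zero gradient) for any negative logit, which is the low-evidence freeze regime the abstract describes.

```python
import numpy as np

def softplus(x):
    # Numerically stable softplus: log(1 + exp(x)); positive gradient everywhere.
    return np.logaddexp(0.0, x)

def evidential_head(logits, activation="softplus"):
    """Map raw logits to non-negative evidence, then compute Dirichlet
    parameters, expected class probabilities, and vacuity uncertainty.

    Illustrative sketch of a common EDL head (not the paper's method).
    """
    if activation == "relu":
        # Gradient is exactly zero wherever logits < 0: frozen evidence.
        evidence = np.maximum(logits, 0.0)
    elif activation == "exp":
        # Clipped for numerical stability; gradient vanishes for very negative logits.
        evidence = np.exp(np.clip(logits, -10.0, 10.0))
    else:
        evidence = softplus(logits)
    K = logits.shape[-1]                       # number of classes
    alpha = evidence + 1.0                     # Dirichlet concentration parameters
    S = alpha.sum(axis=-1, keepdims=True)      # Dirichlet strength
    probs = alpha / S                          # expected class probabilities
    vacuity = K / S.squeeze(-1)                # Subjective-Logic uncertainty mass u = K / S
    return probs, vacuity

# A sample pushed entirely into the negative-logit region under ReLU
# carries zero evidence, so its vacuity is maximal (u = 1).
probs, u = evidential_head(np.array([[-1.0, -2.0, -3.0]]), activation="relu")
```

With all-negative logits and ReLU, evidence is zero, every Dirichlet parameter is 1, and the head outputs the uniform distribution with vacuity 1; softplus on the same logits produces small but nonzero evidence, so gradients can still flow.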

Deep Shankar Pandey, Hyomin Choi, Qi Yu • 2025

Related benchmarks

Task                     Dataset   Result        Rank
Blind Face Restoration   CelebA    PSNR 22.33    10
