Similarity-as-Evidence: Calibrating Overconfident VLMs for Interpretable and Label-Efficient Medical Active Learning
About
Active Learning (AL) reduces annotation costs in medical imaging by selecting only the most informative samples for labeling, but it suffers from the cold-start problem when labeled data are scarce. Vision-Language Models (VLMs) address the cold-start problem via zero-shot predictions, yet their temperature-scaled softmax outputs treat text-image similarities as deterministic scores and ignore inherent uncertainty, leading to overconfidence. This overconfidence misleads sample selection, wasting annotation budget on uninformative cases. To overcome these limitations, the Similarity-as-Evidence (SaE) framework calibrates text-image similarities with a Similarity Evidence Head (SEH), which reinterprets the similarity vector as evidence and parameterizes a Dirichlet distribution over labels. In contrast to a standard softmax, which enforces confident predictions even under weak signals, the Dirichlet formulation explicitly quantifies lack of evidence (vacuity) and conflicting evidence (dissonance), thereby mitigating the overconfidence caused by rigid softmax normalization. Building on this, SaE employs a dual-factor acquisition strategy: high-vacuity samples (e.g., rare diseases) are prioritized in early rounds to ensure coverage, while high-dissonance samples (e.g., ambiguous diagnoses) are prioritized later to refine decision boundaries, providing clinically interpretable selection rationales. Experiments on ten public medical imaging datasets with a 20% label budget show that SaE attains a state-of-the-art macro-averaged accuracy of 82.57%. On the representative BTMRI dataset, SaE also achieves superior calibration, with a negative log-likelihood (NLL) of 0.425.
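The vacuity and dissonance measures above follow the standard subjective-logic treatment of a Dirichlet distribution built from non-negative evidence. The sketch below illustrates that computation and a simple two-factor acquisition score; the evidence-to-Dirichlet mapping (`alpha_k = e_k + 1`) is the conventional formulation, while the linear vacuity-to-dissonance schedule in `acquisition_score` is an illustrative assumption, not the paper's exact rule.

```python
def dirichlet_stats(evidence):
    """Vacuity and dissonance of a Dirichlet built from evidence.

    Conventional subjective-logic formulation (assumed here; SaE's SEH
    produces the evidence vector from text-image similarities):
      alpha_k = e_k + 1,  S = sum(alpha),  belief b_k = e_k / S,
      vacuity u = K / S.
    """
    k = len(evidence)
    s = sum(e + 1.0 for e in evidence)  # Dirichlet strength
    belief = [e / s for e in evidence]
    vacuity = k / s

    def balance(bj, bi):
        # Degree of balance between two belief masses (1 = equal, 0 = one-sided).
        return 1.0 - abs(bj - bi) / (bj + bi) if (bj + bi) > 0 else 0.0

    dissonance = 0.0
    for i, bi in enumerate(belief):
        other = sum(b for j, b in enumerate(belief) if j != i)
        if bi > 0 and other > 0:
            dissonance += bi * sum(
                b * balance(b, bi) for j, b in enumerate(belief) if j != i
            ) / other
    return vacuity, dissonance


def acquisition_score(evidence, round_frac):
    """Illustrative dual-factor score: weight shifts from vacuity (coverage,
    early rounds) to dissonance (boundary refinement, late rounds) as
    round_frac goes from 0 to 1. The linear schedule is an assumption."""
    v, d = dirichlet_stats(evidence)
    return (1.0 - round_frac) * v + round_frac * d
```

For intuition: zero evidence yields vacuity 1 (a rare-disease-like case, selected early), while strong but conflicting evidence such as `[10, 10]` yields low vacuity but high dissonance (an ambiguous case, selected late).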
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Medical Image Classification | BUSI | -- | -- | 88 |
| Image Classification | DermaMNIST | Accuracy | 80.21 | 23 |
| Medical Image Classification | OCTMNIST | Accuracy | 79.8 | 19 |
| Image Classification | Kvasir | Mean Accuracy | 88.58 | 7 |
| Image Classification | Retina | Mean Accuracy | 75.22 | 7 |
| Image Classification | LC25000 | Mean Accuracy | 99.23 | 7 |
| Image Classification | CHMNIST | Mean Accuracy | 91.03 | 7 |
| Image Classification | BTMRI | Mean Accuracy | 93.46 | 7 |
| Image Classification | COVID-QU-Ex | Mean Accuracy | 89.49 | 7 |
| Image Classification | KneeXray | Mean Accuracy | 49.5 | 7 |