
Post-hoc Probabilistic Vision-Language Models

About

Vision-language models (VLMs), such as CLIP and SigLIP, have found remarkable success in classification, retrieval, and generative tasks. For this, VLMs deterministically map images and text descriptions to a joint latent space in which their similarity is assessed using the cosine similarity. However, a deterministic mapping of inputs fails to capture uncertainties over concepts arising from domain shifts when used in downstream tasks. In this work, we propose post-hoc uncertainty estimation in VLMs that does not require additional training. Our method leverages a Bayesian posterior approximation over the last layers in VLMs and analytically quantifies uncertainties over cosine similarities. We demonstrate its effectiveness for uncertainty quantification and support set selection in active learning. Compared to baselines, we obtain improved and well-calibrated predictive uncertainties, interpretable uncertainty estimates, and sample-efficient active learning. Our results show promise for safety-critical applications of large-scale models.
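To make the idea concrete, the sketch below illustrates post-hoc uncertainty over cosine similarities with a simple Monte Carlo stand-in rather than the paper's analytic quantification: it assumes a Gaussian (Laplace-style) posterior over the last linear layer of the image encoder and propagates weight samples into a distribution of similarity scores. All dimensions, variable names, and the isotropic posterior covariance are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: penultimate feature size and joint embedding size.
d_feat, d_emb = 8, 4

# Stand-ins for the deterministic encoder outputs of a pretrained VLM.
phi_img = rng.normal(size=d_feat)  # image features before the last layer
z_txt = rng.normal(size=d_emb)     # text embedding (kept deterministic here)

# Post-hoc Gaussian posterior over the last linear layer, W ~ N(W_map, sigma^2 I),
# obtained without any additional training of the VLM.
W_map = rng.normal(size=(d_emb, d_feat))
sigma = 0.1  # assumed isotropic posterior scale

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Monte Carlo propagation: sample last-layer weights, embed the image,
# and collect the induced distribution of cosine similarities.
samples = []
for _ in range(1000):
    W = W_map + sigma * rng.normal(size=W_map.shape)
    samples.append(cosine(W @ phi_img, z_txt))

mean_sim = float(np.mean(samples))
std_sim = float(np.std(samples))
print(f"cosine similarity: {mean_sim:.3f} +/- {std_sim:.3f}")
```

The spread `std_sim` is the quantity of interest: a large standard deviation over similarities flags inputs (e.g., under domain shift) where the model's match score should not be trusted, which is what drives the calibration and active-learning results described above.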

Anton Baumann, Rui Li, Marcus Klasson, Santeri Mentu, Shyamgopal Karthik, Zeynep Akata, Arno Solin, Martin Trapp • 2024

Related benchmarks

Task                           Dataset          Result                 Rank
Image Classification           CIFAR-100        Top-1 Accuracy 76.77   622
Image Classification           Food-101         Accuracy 87.2          494
Classification                 CIFAR-10         Accuracy 93.62         80
Out-of-Distribution Detection  DTD              AUROC 80.06            36
Error Detection                ImageNet         AUROC 79.41            35
Error Detection                EuroSAT          AUROC 70.54            27
Error Detection                Flowers102       AUROC 85.65            27
Error Detection                Food101          AUROC 86.12            27
Image Classification           ImageNet-Sketch  --                     10
Zero-shot Classification       CIFAR100         --                     10

Showing 10 of 24 rows
