
Bayesian Triplet Loss: Uncertainty Quantification in Image Retrieval

About

Uncertainty quantification in image retrieval is crucial for downstream decisions, yet it remains a challenging and largely unexplored problem. Current methods for estimating uncertainties are poorly calibrated, computationally expensive, or based on heuristics. We present a new method that views image embeddings as stochastic rather than deterministic features. Our two main contributions are (1) a likelihood that matches the triplet constraint and evaluates the probability of an anchor being closer to a positive than to a negative; and (2) a prior over the feature space that justifies the conventional ℓ2 normalization. To ensure computational efficiency, we derive a variational approximation of the posterior, called the Bayesian triplet loss, that produces state-of-the-art uncertainty estimates while matching the predictive performance of current state-of-the-art methods.
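The triplet likelihood described above — the probability that a stochastic anchor embedding lands closer to the positive than to the negative — can be illustrated with a small Monte Carlo sketch. This is not the paper's method (the paper derives a closed-form variational approximation instead); the function name, the diagonal-Gaussian parameterization, and the margin parameter are assumptions made here for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def triplet_likelihood(mu_a, var_a, mu_p, var_p, mu_n, var_n,
                       margin=0.0, n_samples=10_000):
    """Monte Carlo estimate of P(||a - p||^2 < ||a - n||^2 - margin),
    where a, p, n are drawn from diagonal Gaussian embeddings.
    Illustrative sketch only; the paper uses an analytic approximation."""
    dim = len(mu_a)
    # Sample stochastic embeddings for anchor, positive, and negative.
    a = rng.normal(mu_a, np.sqrt(var_a), size=(n_samples, dim))
    p = rng.normal(mu_p, np.sqrt(var_p), size=(n_samples, dim))
    n = rng.normal(mu_n, np.sqrt(var_n), size=(n_samples, dim))
    # Squared Euclidean distances per sample.
    d_ap = np.sum((a - p) ** 2, axis=1)
    d_an = np.sum((a - n) ** 2, axis=1)
    # Fraction of samples satisfying the triplet constraint.
    return np.mean(d_ap < d_an - margin)
```

When the positive shares the anchor's mean and the negative is far away, the estimate approaches 1; when positive and negative are interchangeable, it hovers near 0.5, which is how the likelihood exposes retrieval uncertainty.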

Frederik Warburg, Martin Jørgensen, Javier Civera, Søren Hauberg • 2020

Related benchmarks

Task                                             Dataset              Metric   Result  Rank
Visual Place Recognition Uncertainty Estimation  Nordland (test)      AUC-PR   0.07    8
Visual Place Recognition Uncertainty Estimation  Pittsburgh (test)    AUC-PR   44      8
Visual Place Recognition Uncertainty Estimation  San Francisco (test) AUC-PR   0.17    8
Visual Place Recognition Uncertainty Estimation  St Lucia (test)      AUC-PR   34      8
Visual Place Recognition Uncertainty Estimation  Eynsham (test)       AUC-PR   0.45    8
Visual Place Recognition Uncertainty Estimation  MSLS (test)          AUC-PR   21      8
