
Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval

About

Cross-modal retrieval methods build similarity relations between vision and language modalities by jointly learning a common representation space. However, the predictions are often unreliable due to aleatoric uncertainty, which is induced by low-quality data, e.g., corrupted images, fast-paced videos, and non-detailed texts. In this paper, we propose a novel Prototype-based Aleatoric Uncertainty Quantification (PAU) framework to provide trustworthy predictions by quantifying the uncertainty arising from the inherent data ambiguity. Concretely, we first construct a set of various learnable prototypes for each modality to represent the entire semantics subspace. Then Dempster-Shafer Theory and Subjective Logic Theory are utilized to build an evidential theoretical framework by associating evidence with Dirichlet distribution parameters. The PAU model induces accurate uncertainty and reliable predictions for cross-modal retrieval. Extensive experiments are performed on four major benchmark datasets, MSR-VTT, MSVD, DiDeMo, and MS-COCO, demonstrating the effectiveness of our method. The code is accessible at https://github.com/leolee99/PAU.
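The evidential step described in the abstract (associating evidence with Dirichlet distribution parameters) can be sketched generically. The following is a minimal Subjective Logic computation, not the paper's exact PAU implementation: given non-negative per-class evidence e_k, the Dirichlet parameters are alpha_k = e_k + 1, the belief masses are b_k = e_k / S with S = sum(alpha_k), and the residual uncertainty mass is u = K / S, so that the beliefs and uncertainty sum to 1.

```python
import numpy as np

def dirichlet_uncertainty(evidence):
    """Map per-class evidence e_k >= 0 to Subjective Logic belief masses
    and an overall uncertainty mass.

    alpha_k = e_k + 1 (Dirichlet parameters), S = sum(alpha_k),
    b_k = e_k / S, u = K / S, with b_1 + ... + b_K + u = 1.
    """
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.size
    alpha = evidence + 1.0      # Dirichlet distribution parameters
    S = alpha.sum()             # Dirichlet strength
    belief = evidence / S       # per-class belief masses
    u = K / S                   # residual uncertainty mass
    return belief, u

# Strong evidence concentrated on one class -> low uncertainty.
belief, u = dirichlet_uncertainty([40.0, 1.0, 1.0])

# No evidence at all -> the uncertainty mass is 1 (total ignorance).
belief0, u0 = dirichlet_uncertainty([0.0, 0.0, 0.0])
```

In the retrieval setting, evidence would be derived from prototype-similarity scores per modality; how that evidence is produced is the paper's contribution and is not reproduced here.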

Hao Li, Jingkuan Song, Lianli Gao, Xiaosu Zhu, Heng Tao Shen • 2023

Related benchmarks

Task | Dataset | Result | Rank
Text-to-Video Retrieval | DiDeMo (test) | R@1: 48.6 | 376
Text-to-Video Retrieval | DiDeMo | R@1: 0.486 | 360
Image-to-Text Retrieval | MS-COCO 5K (test) | R@1: 63.6 | 299
Text-to-Video Retrieval | MSR-VTT (test) | R@1: 48.5 | 234
Text-to-Image Retrieval | MS-COCO 5K (test) | R@1: 46.8 | 223
Text-to-Video Retrieval | MSVD | R@1: 47.3 | 218
Text-to-Video Retrieval | MSVD (test) | R@1: 47.3 | 204
Image-to-Text Retrieval | MS-COCO 1K (test) | R@1: 80.4 | 121
Video-to-Text Retrieval | DiDeMo (test) | R@1: 48.1 | 92
Video-to-Text Retrieval | MSVD (test) | R@1: 68.9 | 61
