
Mitigating Test-Time Bias for Fair Image Retrieval

About

We address the challenge of generating fair and unbiased image retrieval results for neutral textual queries (queries with no explicit gender or race connotations), while maintaining the utility (performance) of the underlying vision-language (VL) model. Previous methods aim to disentangle the learned representations of images and text queries from gender and racial characteristics. However, we show that these methods are inadequate for achieving the desired equal-representation outcome, because test-time bias typically exists in the target retrieval set itself. Motivated by this, we introduce a straightforward technique, Post-hoc Bias Mitigation (PBM), that post-processes the outputs of the pre-trained vision-language model. We evaluate our algorithm on real-world image search datasets, Occupation 1 and 2, as well as two large-scale image-text datasets, MS-COCO and Flickr30k. Compared with various existing bias-mitigation methods, our approach achieves the lowest bias in text-based image retrieval results while maintaining satisfactory retrieval performance. The source code is publicly available at https://anonymous.4open.science/r/Fair_Text_based_Image_Retrieval-D8B2.
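The abstract describes PBM as a post-processing step applied to the retrieval outputs of a pre-trained VL model. The paper's exact procedure is not reproduced here; the sketch below shows one plausible post-hoc re-ranking of this flavor, assuming per-image attribute predictions (e.g. from a zero-shot classifier) are available. All function and variable names are illustrative, not the authors' API.

```python
def posthoc_fair_topk(scores, groups, k):
    """Select k retrieved items while balancing predicted attribute groups.

    scores: query-image similarity per candidate (higher = better)
    groups: predicted attribute label per candidate (e.g. 0/1 for gender)

    Illustrative sketch, not the paper's exact algorithm: greedily take
    the best remaining item from whichever group is currently
    under-represented in the selection.
    """
    # Rank candidates best-first by similarity score.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    # Per-group queues of candidate indices, each kept in score order.
    queues = {}
    for idx in order:
        queues.setdefault(groups[idx], []).append(idx)
    counts = {g: 0 for g in queues}
    selected = []
    while len(selected) < k and any(queues.values()):
        # Pick the least-represented group that still has candidates,
        # then take its highest-scoring remaining item.
        g = min((g for g in queues if queues[g]), key=lambda g: counts[g])
        selected.append(queues[g].pop(0))
        counts[g] += 1
    return selected

scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4]
groups = [0, 0, 0, 1, 1, 1]  # hypothetical gender predictions
print(posthoc_fair_topk(scores, groups, 4))  # → [0, 3, 1, 4], a 2/2 split
```

Note that a plain top-4 by score would return items from group 0 only; the re-ranked list trades a small amount of score-ordering for equal group representation, which is the trade-off the abstract describes between bias and retrieval performance.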

Fanjie Kong, Shuai Yuan, Weituo Hao, Ricardo Henao • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Retrieval | Flickr30K | Recall@5 | 85.3 | 21
Image Retrieval | UTKFace (test) | White | 18.6 | 18
Image Retrieval | UTKFace | White Group Score | 39.2 | 15
Image Retrieval | MS COCO 1K | R@1 | 37.3 | 13
Debiased image retrieval | Occupation 1 Gender (test) | Absolute Bias@100 | 0.00e+0 | 10
Debiased image retrieval | Occupation Gender 2 (test) | Absolute Bias@100 | 0.00e+0 | 10
Debiased image retrieval | Occupation Race 2 (test) | Absolute Bias@100 | 0.00e+0 | 10
Image Retrieval | COCO 5k | Bias@1 | 0.0492 | 5
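The benchmarks above report bias metrics such as Bias@K and Absolute Bias@100. One common formulation of such a metric (an assumption here, not a quote of the paper's exact definition) is the absolute deviation of group counts from parity among the top-K retrieved items for a binary attribute:

```python
def abs_bias_at_k(topk_groups):
    """Absolute bias among the top-K retrieved items for a binary attribute.

    topk_groups: 0/1 attribute labels of the top-K results.
    Returns |#group0 - #group1| / K; 0.0 means a perfectly balanced list.
    Illustrative definition, assumed rather than taken from the paper.
    """
    k = len(topk_groups)
    n1 = sum(topk_groups)  # count of group-1 items (labels are 0/1)
    n0 = k - n1            # count of group-0 items
    return abs(n0 - n1) / k

print(abs_bias_at_k([0, 1, 0, 1]))  # → 0.0, balanced top-4
print(abs_bias_at_k([0, 0, 0, 1]))  # → 0.5, skewed toward group 0
```

Under a definition of this shape, the 0.00e+0 entries above would correspond to exactly balanced gender or race representation among the top 100 retrieved images.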
