A Prompt Array Keeps the Bias Away: Debiasing Vision-Language Models with Adversarial Learning
About
Vision-language models can encode societal biases and stereotypes, but measuring and mitigating these multimodal harms is challenging: existing bias measures lack robustness, and debiasing can degrade the learned features. To address these challenges, we investigate bias measures and apply ranking metrics to image-text representations. We then investigate debiasing methods and show that prepending learned embeddings to text queries, jointly trained with adversarial debiasing and a contrastive loss, reduces various bias measures with minimal degradation of the image-text representation.
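The abstract's core idea — a learnable "prompt array" prepended to text queries and trained with a contrastive loss plus an adversarial debiasing term — can be sketched as follows. This is a minimal PyTorch illustration, not the authors' implementation: the encoder architecture, dimensions, adversary, and the single-step sign-flip on the adversarial loss are all simplified assumptions (real adversarial training typically alternates encoder and adversary updates, and the vision-language backbone is usually frozen).

```python
# Hedged sketch: learnable prompt embeddings prepended to text tokens,
# trained with a contrastive loss minus an adversarial attribute loss.
# All sizes and module choices below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptDebiasedTextEncoder(nn.Module):
    def __init__(self, vocab_size=1000, dim=64, n_prompts=4):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, dim)
        # the learnable "prompt array", prepended to every text query
        self.prompts = nn.Parameter(torch.randn(n_prompts, dim) * 0.02)
        self.proj = nn.Linear(dim, dim)

    def forward(self, ids):                          # ids: (B, L)
        x = self.tok(ids)                            # (B, L, D)
        p = self.prompts.unsqueeze(0).expand(ids.size(0), -1, -1)
        x = torch.cat([p, x], dim=1)                 # prepend prompts
        return F.normalize(self.proj(x.mean(dim=1)), dim=-1)

def contrastive_loss(txt, img, tau=0.07):
    # symmetric InfoNCE over the in-batch similarity matrix
    logits = txt @ img.t() / tau                     # (B, B)
    labels = torch.arange(txt.size(0))
    return (F.cross_entropy(logits, labels)
            + F.cross_entropy(logits.t(), labels)) / 2

encoder = PromptDebiasedTextEncoder()
adversary = nn.Linear(64, 2)                         # predicts protected attribute

ids = torch.randint(0, 1000, (8, 5))                 # toy token ids
img = F.normalize(torch.randn(8, 64), dim=-1)        # stand-in for frozen image features
attr = torch.randint(0, 2, (8,))                     # protected-attribute labels

txt = encoder(ids)
adv_loss = F.cross_entropy(adversary(txt), attr)
# subtracting adv_loss pushes the encoder to *hide* the attribute;
# in practice the adversary is updated separately to minimize adv_loss
loss = contrastive_loss(txt, img) - 0.5 * adv_loss
loss.backward()
```

The key property is that gradients flow into `encoder.prompts`, so only a small array of embeddings needs to be learned while the rest of the model can stay frozen.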
Hugo Berg, Siobhan Mackenzie Hall, Yash Bhalgat, Wonsuk Yang, Hannah Rose Kirk, Aleksandar Shtedritski, Max Bain • 2022
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Retrieval | UTKFace (test) | White | 19.4 | 18 |
| Image Retrieval | UTKFace | White Group Score | 39.8 | 15 |
| CXR Diagnosis | CheXpert Plus Race attribute (test) | Accuracy | 59.33 | 10 |
| CXR Diagnosis | CheXpert Plus Gender attribute (test) | Accuracy | 56 | 10 |
| Debiased image retrieval | Occupation Gender 2 (test) | Absolute Bias@100 | 0.3564 | 10 |
| Debiased image retrieval | Occupation Race 2 (test) | Absolute Bias@100 | 0.4946 | 10 |
| Debiased image retrieval | Occupation 1 Gender (test) | Absolute Bias@100 | 0.6373 | 10 |
| Fair Image Retrieval | CelebA (test) | KL Divergence | 0.066 | 9 |
| Bias Mitigation for Stereotype Queries | UTKFACE Race | KL Divergence | 0.158 | 9 |
| Bias Mitigation for Stereotype Queries | UTKFACE Gender | KL Divergence | 0.091 | 9 |
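Several benchmarks above report a KL-divergence bias score for retrieval. A common way to compute such a score, sketched here under the assumption that bias is measured as the divergence between the protected-attribute distribution of the top-K retrieved images and a uniform target distribution (the exact reference distribution used by each benchmark may differ):

```python
# Hedged sketch of a KL-divergence retrieval-bias measure:
# KL(retrieved attribute distribution || uniform). 0.0 = perfectly
# balanced top-K results; larger values = more demographic skew.
import math
from collections import Counter

def retrieval_kl_bias(retrieved_attrs, attrs=("male", "female")):
    counts = Counter(retrieved_attrs)
    k = len(retrieved_attrs)
    p_uniform = 1.0 / len(attrs)
    kl = 0.0
    for a in attrs:
        p = counts.get(a, 0) / k
        if p > 0:                       # 0 * log(0) -> 0 by convention
            kl += p * math.log(p / p_uniform)
    return kl

balanced = ["male", "female"] * 50      # 50/50 retrieved set
skewed = ["male"] * 90 + ["female"] * 10
print(round(retrieval_kl_bias(balanced), 3))  # 0.0
print(round(retrieval_kl_bias(skewed), 3))    # 0.368
```

Debiasing aims to push this score toward zero without hurting retrieval quality, which is why the table pairs it with standard retrieval benchmarks.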