A Prompt Array Keeps the Bias Away: Debiasing Vision-Language Models with Adversarial Learning
About
Vision-language models can encode societal biases and stereotypes, but measuring and mitigating these multimodal harms is challenging due to a lack of robust bias measurements and the risk of degrading the learned features. To address these challenges, we investigate bias measures and apply ranking metrics to image-text representations. We then investigate debiasing methods and show that prepending learned embeddings to text queries, trained jointly with adversarial debiasing and a contrastive loss, reduces various bias measures with minimal degradation of the image-text representation.
Hugo Berg, Siobhan Mackenzie Hall, Yash Bhalgat, Wonsuk Yang, Hannah Rose Kirk, Aleksandar Shtedritski, Max Bain • 2022
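The core idea, a learnable prompt array prepended to text queries and trained jointly with a contrastive loss and an adversarial attribute classifier, can be sketched in a few lines of PyTorch. Everything below (the class names, the prompt count, the adversary architecture, and the `text_encoder` interface) is an illustrative assumption, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated gradient on the backward
    pass, so the prompts are trained to *fool* the attribute adversary."""

    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output


class PromptArrayDebiaser(nn.Module):
    """Hypothetical module: a small array of learnable prompt embeddings
    prepended to each text query's token embeddings, plus an adversary
    that tries to predict a protected attribute from the text features."""

    def __init__(self, num_prompts: int = 8, embed_dim: int = 512,
                 num_attr_classes: int = 2):
        super().__init__()
        # Learned prompt array, shared across all text queries.
        self.prompts = nn.Parameter(0.02 * torch.randn(num_prompts, embed_dim))
        self.adversary = nn.Sequential(
            nn.Linear(embed_dim, 128), nn.ReLU(),
            nn.Linear(128, num_attr_classes),
        )

    def prepend(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, embed_dim) -> prepend the prompts.
        batch = token_embeds.shape[0]
        prompts = self.prompts.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompts, token_embeds], dim=1)


def contrastive_loss(img_feats, txt_feats, temperature: float = 0.07):
    """Symmetric CLIP-style InfoNCE loss over matched image-text pairs."""
    img = F.normalize(img_feats, dim=-1)
    txt = F.normalize(txt_feats, dim=-1)
    logits = img @ txt.t() / temperature
    labels = torch.arange(img.shape[0], device=img.device)
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))


def debias_step(debiaser, text_encoder, img_feats, token_embeds,
                attr_labels, lam: float = 1.0):
    """One joint update. Assumes a frozen image tower and a text encoder
    that accepts pre-computed token embeddings (in practice this needs a
    small wrapper around e.g. CLIP's text transformer)."""
    txt_feats = text_encoder(debiaser.prepend(token_embeds))
    l_con = contrastive_loss(img_feats, txt_feats)
    # Gradient reversal turns minimising the adversary's loss into
    # maximising attribute confusion for the prompt parameters.
    attr_logits = debiaser.adversary(GradReverse.apply(txt_feats))
    l_adv = F.cross_entropy(attr_logits, attr_labels)
    return l_con + lam * l_adv
```

Because only the prompt array and the adversary are optimised while the pretrained encoders stay frozen, this setup is consistent with the minimal feature degradation noted in the abstract.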
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Image Retrieval | Flickr30K | R@1 | 79.02 | 531 |
| Bias Mitigation for Stereotype Queries | UTKFace Gender | KL Divergence | 0.091 | 33 |
| Bias Mitigation for Stereotype Queries | UTKFace Race | KL Divergence | 0.158 | 33 |
| Text-to-Image Retrieval | COCO 2017 | Recall@5 | 90.4 | 24 |
| Multi-class classification | FACET | Accuracy | 56.37 | 18 |
| Image Retrieval | UTKFace (test) | White | 19.4 | 18 |
| Image Retrieval | UTKFace | White Group Score | 39.8 | 15 |
| CXR Diagnosis | CheXpert Plus Race attribute (test) | Accuracy | 59.33 | 10 |
| CXR Diagnosis | CheXpert Plus Gender attribute (test) | Accuracy | 56 | 10 |
| Debiased image retrieval | Occupation Gender 2 (test) | AbsBias@100 | 0.3564 | 10 |
*Showing 10 of 16 rows.*
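As a concrete illustration of the KL-divergence bias measure reported above for UTKFace, here is a hedged sketch: retrieve the top-k images for a query and compare the attribute distribution among them against a uniform target. The function name, the value of k, and the uniform target are assumptions for illustration, not the paper's exact formulation.

```python
import torch


def retrieval_kl_bias(similarities: torch.Tensor,
                      attributes: torch.Tensor,
                      k: int = 100,
                      num_groups: int = 2) -> float:
    """KL divergence between the attribute distribution of the top-k
    retrieved images and a uniform target. `similarities` holds one
    query's scores over all images; `attributes` holds each image's
    integer group label in [0, num_groups). Lower is less biased."""
    topk = similarities.topk(k).indices
    counts = torch.bincount(attributes[topk], minlength=num_groups).float()
    retrieved = counts / counts.sum()
    target = torch.full((num_groups,), 1.0 / num_groups)
    eps = 1e-8  # avoid log(0) when a group is absent from the top-k
    return float((retrieved * torch.log((retrieved + eps) / target)).sum())
```

Averaging this quantity over a set of neutral queries (e.g. occupation prompts) yields a single bias score per attribute, in the spirit of the UTKFace gender and race rows above.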