
A Prompt Array Keeps the Bias Away: Debiasing Vision-Language Models with Adversarial Learning

About

Vision-language models can encode societal biases and stereotypes, but measuring and mitigating these multimodal harms is challenging: existing bias measures lack robustness, and debiasing tends to degrade the learned features. To address these challenges, we investigate bias measures and apply ranking metrics for image-text representations. We then investigate debiasing methods and show that prepending learned embeddings to text queries, jointly trained with adversarial debiasing and a contrastive loss, reduces various bias measures with minimal degradation to the image-text representation.
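The loss structure the abstract describes can be sketched as follows. This is a minimal numpy toy, not the authors' implementation: the prompt array is reduced to a mean vector added to pooled text features (rather than tokens prepended to the encoder input), the adversary is a fixed linear probe, and all names (`prompts`, `lam`, `adversary_loss`) are hypothetical. In practice the prompt parameters would be trained to minimize `L_con - lam * L_adv` while the adversary separately minimizes `L_adv` (e.g., via gradient reversal).

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def contrastive_loss(img, txt, temperature=0.07):
    """Symmetric InfoNCE over matched image-text pairs (CLIP-style)."""
    logits = l2_normalize(img) @ l2_normalize(txt).T / temperature
    labels = np.arange(len(img))
    def ce(lg):
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()
    return 0.5 * (ce(logits) + ce(logits.T))

def adversary_loss(feats, attr, W, b):
    """Binary cross-entropy of a linear probe predicting the protected attribute."""
    p = 1 / (1 + np.exp(-(feats @ W + b)))
    return -(attr * np.log(p) + (1 - attr) * np.log(1 - p)).mean()

# Toy batch: 8 image-text pairs, 16-d embeddings, binary protected attribute.
B, D = 8, 16
img = rng.normal(size=(B, D))
txt = rng.normal(size=(B, D))
attr = rng.integers(0, 2, size=B)

# Learned "prompt array": K extra embeddings. Here their mean is simply added
# to each pooled text feature -- a stand-in for prepending prompt tokens.
K = 4
prompts = rng.normal(scale=0.01, size=(K, D))
debiased_txt = txt + prompts.mean(axis=0)

W, b = rng.normal(size=D), 0.0
lam = 1.0
L_con = contrastive_loss(img, debiased_txt)
L_adv = adversary_loss(l2_normalize(debiased_txt), attr, W, b)

# Prompts minimize L_con while *increasing* adversary loss (fooling the probe),
# so the joint objective for the prompt parameters is:
L_total = L_con - lam * L_adv
print(round(float(L_total), 4))
```

The opposing signs on the two terms are the key design point: the contrastive term preserves image-text alignment while the adversarial term removes protected-attribute information from the text features.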

Hugo Berg, Siobhan Mackenzie Hall, Yash Bhalgat, Wonsuk Yang, Hannah Rose Kirk, Aleksandar Shtedritski, Max Bain • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Retrieval | UTKFace (test) | White | 19.4 | 18 |
| Image Retrieval | UTKFace | White Group Score | 39.8 | 15 |
| CXR Diagnosis | CheXpert Plus Race attribute (test) | Accuracy | 59.33 | 10 |
| CXR Diagnosis | CheXpert Plus Gender attribute (test) | Accuracy | 56 | 10 |
| Debiased Image Retrieval | Occupation Gender 2 (test) | Absolute Bias@100 | 0.3564 | 10 |
| Debiased Image Retrieval | Occupation Race 2 (test) | Absolute Bias@100 | 0.4946 | 10 |
| Debiased Image Retrieval | Occupation 1 Gender (test) | Absolute Bias@100 | 0.6373 | 10 |
| Fair Image Retrieval | CelebA (test) | KL Divergence | 0.066 | 9 |
| Bias Mitigation for Stereotype Queries | UTKFace Race | KL Divergence | 0.158 | 9 |
| Bias Mitigation for Stereotype Queries | UTKFace Gender | KL Divergence | 0.091 | 9 |
Showing 10 of 11 rows
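The two fairness metrics above can be illustrated with a small sketch. This is an assumption-laden toy, not the benchmarks' official scoring code: `kl_divergence` compares the demographic distribution of retrieved results against a reference (uniform here), and `abs_bias_at_k` is a two-group reading of Absolute Bias@K as the absolute gap in group proportions among the top-k results; the benchmarks' exact definitions may differ.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions (counts are normalized)."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float((p * np.log(p / q)).sum())

def abs_bias_at_k(attrs, k=100):
    """Absolute gap in group proportions among the top-k retrieved results
    (illustrative two-group version; benchmark definitions may differ)."""
    top = np.asarray(attrs[:k])
    p = (top == 1).mean()
    return float(abs(p - (1 - p)))

# Toy retrieval: binary attribute labels of the ranked results.
ranked = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
counts = [sum(a == g for a in ranked) for g in (0, 1)]  # group counts [4, 6]
kl = kl_divergence(counts, [0.5, 0.5])   # small: retrieval is nearly balanced
bias = abs_bias_at_k(ranked, k=10)
print(kl, bias)
```

Both metrics hit 0 for perfectly balanced retrieval, which is why lower values in the table indicate less bias.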
