
Discovering and Mitigating Visual Biases through Keyword Explanation

About

Addressing biases in computer vision models is crucial for real-world AI deployments. However, mitigating visual biases is challenging because of their unexplainable nature: they are often identified only indirectly, through visualization or sample statistics, which requires additional human supervision to interpret. To tackle this issue, we propose the Bias-to-Text (B2T) framework, which interprets visual biases as keywords. Specifically, we extract common keywords from the captions of mispredicted images to identify potential biases in the model. We then validate these keywords by measuring their similarity to the mispredicted images with a vision-language scoring model. Expressing visual biases as keywords offers several advantages, such as clear group naming for bias discovery and a natural extension to debiasing using these group names. Our experiments demonstrate that B2T can identify known biases, such as gender bias in CelebA, background bias in Waterbirds, and distribution shifts in ImageNet-R/C. Additionally, B2T uncovers novel biases in larger datasets, such as Dollar Street and ImageNet. For example, we discovered a contextual bias between "bee" and "flower" in ImageNet. We also highlight various applications of B2T keywords, including debiased training, CLIP prompting, and model comparison.

Younghyun Kim, Sangwoo Mo, Minkyu Kim, Kyungmin Lee, Jaeho Lee, Jinwoo Shin • 2023
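The two-step pipeline described above (mine common keywords from captions of mispredicted images, then validate each keyword by comparing its vision-language similarity to mispredicted versus correctly predicted images) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the stop-word list is ad hoc, and the embeddings are assumed to be precomputed by a CLIP-style encoder (here they are plain NumPy vectors).

```python
from collections import Counter
import numpy as np

def common_keywords(captions, top_k=3):
    """Extract the most frequent content words from captions
    of mispredicted images (candidate bias keywords)."""
    stop = {"a", "an", "the", "of", "in", "on", "with", "and", "is"}
    words = [w for c in captions for w in c.lower().split() if w not in stop]
    return [w for w, _ in Counter(words).most_common(top_k)]

def bias_score(keyword_emb, wrong_embs, correct_embs):
    """Score a candidate keyword: mean cosine similarity to the
    mispredicted group minus mean similarity to the correct group.
    A large positive score suggests the keyword describes a bias."""
    def mean_sim(text, imgs):
        text = text / np.linalg.norm(text)
        imgs = imgs / np.linalg.norm(imgs, axis=1, keepdims=True)
        return float((imgs @ text).mean())
    return mean_sim(keyword_emb, wrong_embs) - mean_sim(keyword_emb, correct_embs)

# Toy usage with 2-D stand-in embeddings (a real run would use CLIP features).
keywords = common_keywords(["a bird on water", "bird in water"], top_k=1)
score = bias_score(
    np.array([1.0, 0.0]),                    # keyword embedding
    np.array([[1.0, 0.0], [0.9, 0.1]]),      # mispredicted images
    np.array([[0.0, 1.0]]),                  # correctly predicted images
)
```

In the paper's setting, a keyword like "forest" scoring high against mispredicted waterbird images would flag a background bias worth validating.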

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | Waterbirds (test) | Worst-Group Accuracy | 90.7 | 92 |
| Blond Hair Classification | CelebA (test) | Average Group Accuracy | 93.2 | 30 |
| Bias Detection Agreement | ImageNet-X (val) | Agreement Score | 0.2 | 12 |
| Bias Detection | ImageNet-X 1.0 (val) | GT → Detected HIT | 2.55 | 12 |
| Bias Detection | CelebA (val) | HIT Rate | 6.59 | 6 |
| Multi-class Debiasing | MetaShift 10-class | Worst-Group Accuracy (p=12%) | 70.08 | 3 |
| Multi-class Debiasing | MetaShift 2-class | Worst-Group Accuracy (p=12%) | 74.54 | 3 |
| Bias Detection Agreement | CelebA (val) | VQA Agreement Score | 10 | 3 |

Other info

Code
