
Fairness without Demographics through Adversarially Reweighted Learning

About

Much of the previous machine learning (ML) fairness literature assumes that protected features such as race and sex are present in the dataset, and relies upon them to mitigate fairness concerns. In practice, however, factors such as privacy and regulation often preclude the collection of protected features, or their use for training or inference, severely limiting the applicability of traditional fairness research. We therefore ask: how can we train an ML model to improve fairness when we do not even know the protected group memberships? In this work we address this problem by proposing Adversarially Reweighted Learning (ARL). In particular, we hypothesize that non-protected features and task labels are valuable for identifying fairness issues, and can be used to co-train an adversarial reweighting approach for improving fairness. Our results show that ARL improves Rawlsian Max-Min fairness, with notable AUC improvements for worst-case protected groups on multiple datasets, outperforming state-of-the-art alternatives.
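The core idea above can be sketched in a few lines: a learner minimizes a weighted loss while an adversary, which sees only non-protected features and the task label, assigns higher weights to high-loss examples. The sketch below is a minimal illustration on toy data, not the authors' implementation; the reweighting formula (weights of the form 1 + n·s_i/Σs, so they never drop below 1) follows the paper, but the models, the toy dataset, and the simplified adversary gradient step are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a majority group and a harder minority group. The group id is
# NOT given to the model -- ARL must find hard examples from (x, y) alone.
n_maj, n_min = 400, 100
X_maj = rng.normal(0.0, 1.0, size=(n_maj, 2))
y_maj = (X_maj[:, 0] + X_maj[:, 1] > 0).astype(float)
X_min = rng.normal(2.0, 1.0, size=(n_min, 2))
y_min = (X_min[:, 0] - X_min[:, 1] > 0).astype(float)
X = np.vstack([X_maj, X_min])
y = np.concatenate([y_maj, y_min])
n = len(y)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Learner: logistic regression, minimizes the weighted loss.
w_l, b_l = np.zeros(2), 0.0
# Adversary: linear model over (features, label), maximizes the weighted loss.
w_a, b_a = np.zeros(3), 0.0
lr = 0.1

for step in range(500):
    p = sigmoid(X @ w_l + b_l)
    loss_i = -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

    # Adversary scores each example from (x, y); the normalization keeps
    # every weight >= 1 and the average weight at ~2.
    xa = np.hstack([X, y[:, None]])
    s = sigmoid(xa @ w_a + b_a)
    lam = 1.0 + n * s / (s.sum() + 1e-9)

    # Learner step: gradient descent on the lambda-weighted loss.
    g = (lam * (p - y)) @ X / n
    w_l -= lr * g
    b_l -= lr * (lam * (p - y)).mean()

    # Adversary step: gradient ascent toward high-loss examples. The exact
    # gradient through the normalization is messy; ascending on s_i * loss_i
    # is a simplification assumed here for brevity.
    ds = s * (1 - s) * loss_i
    w_a += lr * (xa * ds[:, None]).mean(axis=0)
    b_a += lr * ds.mean()

acc = ((sigmoid(X @ w_l + b_l) > 0.5) == y).mean()
print(f"training accuracy: {acc:.3f}")
```

In the paper the learner and adversary are small neural networks trained jointly; the alternating gradient steps above stand in for that minimax training loop.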

Preethi Lahoti, Alex Beutel, Jilin Chen, Kang Lee, Flavien Prost, Nithum Thain, Xuezhi Wang, Ed H. Chi • 2020

Related benchmarks

Task                 Dataset               Metric        Result    Rank
Classification       Bank                  Accuracy      69.8      25
Classification       Adult                 Accuracy      82.6      21
Classification       COMM                  Accuracy      81.95     20
Classification       German                Delta DP      0.1383    20
Classification       MEPS                  AUC           83.52     19
Classification       LSAC                  AUC           0.8662    19
Classification       German Credit (test)  Accuracy      74        16
Fair Classification  Adult                 Delta DP      0.1991    16
Fair Classification  COMPAS                DP Disparity  0.1512    16
Classification       COMPAS                Accuracy      65.37     15

Showing 10 of 25 rows.
