
Achieving Fairness at No Utility Cost via Data Reweighing with Influence

About

With the rapid development of algorithmic governance, fairness has become a compulsory property for machine learning models to suppress unintentional discrimination. In this paper, we focus on pre-processing methods for achieving fairness, and propose a data reweighing approach that only adjusts sample weights during the training phase. Unlike most previous reweighing methods, which assign a uniform weight to each (sub)group, we granularly model the influence of each training sample with regard to a fairness-related quantity and predictive utility, and compute individual weights based on influence under constraints from both fairness and utility. Experimental results reveal that previous methods achieve fairness at a non-negligible cost in utility, while, as a significant advantage, our approach can empirically relax this tradeoff and obtain cost-free fairness for equal opportunity. We demonstrate this cost-free fairness with vanilla classifiers and standard training processes, compared against baseline methods on multiple real-world tabular datasets. Code available at https://github.com/brandeis-machine-learning/influence-fairness.
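The core idea above can be sketched with influence functions: for a converged model, the effect of upweighting training sample i on any differentiable quantity f is approximately -∇f᠎ᵀ H⁻¹ ∇ℓᵢ, where H is the Hessian of the training loss. Below is a minimal, self-contained numpy sketch on synthetic data (all variable names and the greedy weight update are illustrative assumptions; the paper solves a constrained optimization over individual weights rather than this greedy rule):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: features X, binary label y, binary group a
n, d = 200, 5
X = rng.normal(size=(n, d))
a = rng.integers(0, 2, size=n)                      # sensitive attribute
y = (X[:, 0] + 0.5 * a + 0.3 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit a vanilla logistic regression by gradient descent
w = np.zeros(d)
for _ in range(500):
    p = sigmoid(X @ w)
    w -= 0.1 * X.T @ (p - y) / n

# Per-sample gradients of the log-loss at the fitted parameters
p = sigmoid(X @ w)
grads = X * (p - y)[:, None]                        # shape (n, d)

# Damped Hessian of the mean loss (damping keeps it invertible)
S = p * (1 - p)
H = (X.T * S) @ X / n + 1e-3 * np.eye(d)
H_inv = np.linalg.inv(H)

# Influence of upweighting each sample on (a) overall utility loss and
# (b) a fairness-related gap: mean loss difference between the two groups
g_util = grads.mean(axis=0)
g_fair = grads[a == 1].mean(axis=0) - grads[a == 0].mean(axis=0)
infl_util = -grads @ H_inv @ g_util                 # influence on utility
infl_fair = -grads @ H_inv @ g_fair                 # influence on group gap

# Illustrative greedy reweighing: upweight samples whose influence
# shrinks the fairness gap without increasing the utility loss
delta = np.where((infl_fair < 0) & (infl_util <= 0), 1.0, 0.0)
weights = 1.0 + delta                               # individual sample weights
```

The resulting `weights` would then be fed back into a standard weighted training loop; the constraint structure (fairness improvement subject to bounded utility change) is what the paper formalizes and solves exactly.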

Peizhao Li, Hongfu Liu • 2022

Related benchmarks

Task                 Dataset   Metric        Result    Rank
Classification       Adult     Accuracy      82.6      27
Classification       COMM      Accuracy      81.95     20
Classification       German    Delta DP      0.0054    20
Fair Classification  Adult     Delta DP      -0.0504   16
Fair Classification  COMPAS    DP Disparity  0.1188    16
Fair Classification  COMM      Delta DP      0.337     15
Classification       COMPAS    Accuracy      64.96     15
Classification       German    Accuracy      66        15
Classification       Adult     Delta DP      15.96     7
Classification       COMPAS    Delta DP      0.103     7
