
Fairness Without Demographics in Repeated Loss Minimization

About

Machine learning models (e.g., speech recognizers) are usually trained to minimize average loss, which results in representation disparity: minority groups (e.g., non-native speakers) contribute less to the training objective and thus tend to suffer higher loss. Worse, as model accuracy affects user retention, a minority group can shrink over time. In this paper, we first show that the status quo of empirical risk minimization (ERM) amplifies representation disparity over time, which can even make initially fair models unfair. To mitigate this, we develop an approach based on distributionally robust optimization (DRO), which minimizes the worst-case risk over all distributions close to the empirical distribution. We prove that this approach controls the risk of the minority group at each time step, in the spirit of Rawlsian distributive justice, while remaining oblivious to the identity of the groups. We demonstrate that DRO prevents disparity amplification on examples where ERM fails, and show improvements in minority group user satisfaction in a real-world text autocomplete task.
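The DRO approach described above can be computed via the dual form of the worst-case risk over a chi-squared divergence ball: for a minority-group proportion bound alpha_min, the robust risk is inf over eta of C * sqrt(E[(loss - eta)_+^2]) + eta, with C = sqrt(2 * (1/alpha_min - 1)^2 + 1). The sketch below illustrates that dual objective with NumPy; the grid search over eta and the function names are illustrative choices, not the authors' implementation (the paper optimizes this jointly with model parameters).

```python
import numpy as np

def chi2_dro_risk(per_example_losses, alpha_min):
    """Worst-case risk over distributions within a chi-squared ball
    of the empirical distribution, via the one-dimensional dual:
        inf_eta  C * sqrt(mean((loss - eta)_+^2)) + eta
    where C = sqrt(2 * rho + 1) and rho = (1/alpha_min - 1)^2 bounds
    how far a minority group of proportion >= alpha_min can be upweighted.
    """
    losses = np.asarray(per_example_losses, dtype=float)
    rho = (1.0 / alpha_min - 1.0) ** 2
    C = np.sqrt(2.0 * rho + 1.0)

    def dual_objective(eta):
        excess = np.clip(losses - eta, 0.0, None)  # (loss - eta)_+
        return C * np.sqrt(np.mean(excess ** 2)) + eta

    # The dual is convex in eta, so a dense grid scan suffices here
    # (a production version would use a 1-D convex solver instead).
    etas = np.linspace(losses.min() - 1.0, losses.max(), 1000)
    values = [dual_objective(eta) for eta in etas]
    best = int(np.argmin(values))
    return values[best], etas[best]
```

Because the dual upweights the largest per-example losses, the returned risk sits between the average loss (the ERM objective) and the maximum loss, which is what lets DRO control the minority group's risk without knowing group identities.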

Tatsunori B. Hashimoto, Megha Srivastava, Hongseok Namkoong, Percy Liang • 2018

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Classification | German Credit (test) | Accuracy | 54.9 | 16 |
| Fair Classification | German Credit (test) | Equal Opportunity Difference | 62.9 | 15 |
| Classification | ACSIncome state RI (test) | Avg. Accuracy | 53.3 | 14 |
| Classification | ACSIncome state AZ | Avg. Accuracy | 47.1 | 14 |
| Employment Prediction | ACSEmployment state LA (test) | Avg. Accuracy | 50.5 | 14 |
| Employment Prediction | ACSEmployment MI (test) | Avg. Accuracy | 50.0 | 14 |
| Employment Prediction | ACSEmployment 2018 (state IA) | Avg. Accuracy | 53.7 | 14 |
| Income Prediction | ACSIncome state MT (test) | Avg. Accuracy | 54.2 | 14 |
| Classification | ACSEmployment CT (test) | Avg. Accuracy | 52.3 | 14 |
| Employment Prediction | ACSEmployment OR (test) | Avg. Accuracy | 47.2 | 14 |

Showing 10 of 13 rows.
