
Counterfactual Fairness in Text Classification through Robustness

About

In this paper, we study counterfactual fairness in text classification, which asks the question: How would the prediction change if the sensitive attribute referenced in the example were different? Toxicity classifiers demonstrate a counterfactual fairness issue by predicting that "Some people are gay" is toxic while "Some people are straight" is nontoxic. We offer a metric, counterfactual token fairness (CTF), for measuring this particular form of fairness in text classifiers, and describe its relationship with group fairness. Further, we offer three approaches, blindness, counterfactual augmentation, and counterfactual logit pairing (CLP), for optimizing counterfactual token fairness during training, bridging the robustness and fairness literature. Empirically, we find that blindness and CLP address counterfactual token fairness. The methods do not harm classifier performance, and have varying tradeoffs with group fairness. These approaches, both for measurement and optimization, provide a new path forward for addressing fairness concerns in text classification.
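The abstract's counterfactual token fairness idea can be illustrated with a toy sketch: measure how much a classifier's logit shifts when a sensitive token is swapped for another, and use that same logit difference as a pairing penalty during training. This is only an illustrative sketch, not the paper's implementation; the `toy_logit` stand-in classifier, the `IDENTITY_TOKENS` list, and the single-token substitution scheme are all assumptions made here for demonstration.

```python
# Illustrative sketch of a counterfactual token fairness (CTF) gap and a
# counterfactual logit pairing (CLP)-style penalty. The classifier below is
# a deliberately biased toy stand-in, not the paper's model.

IDENTITY_TOKENS = ["gay", "straight"]  # example sensitive tokens (assumed)

def toy_logit(text):
    # Stand-in classifier: a biased rule that scores "gay" as more toxic.
    score = 0.0
    words = text.split()
    if "gay" in words:
        score += 2.0
    if "hate" in words:
        score += 3.0
    return score

def counterfactuals(text, tokens=IDENTITY_TOKENS):
    # Generate copies of the text with each sensitive token swapped
    # for every other token in the identity list.
    out = []
    words = text.split()
    for i, w in enumerate(words):
        if w in tokens:
            for t in tokens:
                if t != w:
                    out.append(" ".join(words[:i] + [t] + words[i + 1:]))
    return out

def ctf_gap(text, logit_fn=toy_logit):
    # CTF gap: largest absolute logit change across counterfactual swaps.
    # A counterfactually fair classifier would score 0 here.
    cfs = counterfactuals(text)
    if not cfs:
        return 0.0
    return max(abs(logit_fn(text) - logit_fn(cf)) for cf in cfs)

def clp_penalty(text, logit_fn=toy_logit):
    # CLP-style term: penalize logit differences between an example and
    # its counterfactuals (added to the usual classification loss).
    return sum(abs(logit_fn(text) - logit_fn(cf)) for cf in counterfactuals(text))
```

On the abstract's own example, the biased toy model shifts its logit by 2.0 when "gay" is swapped for "straight", so `ctf_gap("some people are gay")` returns 2.0, while a sentence with no sensitive token has a gap of 0.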

Sahaj Garg, Vincent Perot, Nicole Limtiaco, Ankur Taly, Ed H. Chi, Alex Beutel • 2018

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Linear regression | Law School Success (test) | MSE | 0.8622 | 12 |
| Fairness classification | UCI Adult | Accuracy | 79.14 | 12 |
| Fairness metric evaluation | Bias in Bios (test) | Correlation | 0.326 | 6 |
| Classification | UCI Adult (test) | Accuracy (Weighted) | 79.11 | 6 |
| Fairness metric evaluation | Jigsaw Toxicity (test) | Correlation | 0.214 | 6 |
| Logistic regression classification | UCI Adult | Accuracy (Weighted) | 78.82 | 6 |
| Linear regression | Synthetic data (test) | MSE | 0.8598 | 5 |
