
Fairness Constraints: Mechanisms for Fair Classification

About

Algorithmic decision making systems are ubiquitous across a wide variety of online as well as offline services. These systems rely on complex learning methods and vast amounts of data to optimize the service functionality, satisfaction of the end user and profitability. However, there is a growing concern that these automated decisions can lead, even in the absence of intent, to a lack of fairness, i.e., their outcomes can disproportionately hurt (or, benefit) particular groups of people sharing one or more sensitive attributes (e.g., race, sex). In this paper, we introduce a flexible mechanism to design fair classifiers by leveraging a novel intuitive measure of decision boundary (un)fairness. We instantiate this mechanism with two well-known classifiers, logistic regression and support vector machines, and show on real-world data that our mechanism allows for a fine-grained control on the degree of fairness, often at a small cost in terms of accuracy.

Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna P. Gummadi • 2015
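The paper's mechanism constrains the empirical covariance between the sensitive attribute and the signed distance to the decision boundary. A minimal sketch of that idea for logistic regression, using `scipy.optimize` with SLSQP (the synthetic data, threshold `c`, and variable names are illustrative assumptions, not the paper's experimental setup):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical synthetic data: one feature is correlated with a
# binary sensitive attribute z, so an unconstrained classifier
# would pick up that correlation.
n = 500
z = rng.integers(0, 2, size=n)                  # sensitive attribute
x = np.column_stack([rng.normal(z, 1.0, n),     # feature correlated with z
                     rng.normal(0.0, 1.0, n),   # independent feature
                     np.ones(n)])               # intercept term
y = (x[:, 0] + 0.5 * x[:, 1] + rng.normal(0, 0.5, n) > 0.5).astype(float)

def log_loss(theta):
    """Logistic-regression negative log-likelihood (y in {0, 1})."""
    s = x @ theta
    return np.mean(np.log1p(np.exp(-s)) + (1.0 - y) * s)

def boundary_covariance(theta):
    """Empirical covariance between z and the signed distance to the
    decision boundary, theta^T x -- the paper's (un)fairness measure."""
    return np.mean((z - z.mean()) * (x @ theta))

c = 0.1  # fairness threshold: smaller c -> stricter fairness, lower accuracy
res = minimize(
    log_loss,
    x0=np.zeros(x.shape[1]),
    method="SLSQP",
    # Two one-sided constraints enforce |covariance| <= c.
    constraints=[{"type": "ineq", "fun": lambda t: c - boundary_covariance(t)},
                 {"type": "ineq", "fun": lambda t: c + boundary_covariance(t)}],
)
theta_fair = res.x
```

Tightening or loosening `c` is what gives the fine-grained control over the fairness/accuracy trade-off described in the abstract.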

Related benchmarks

Task                 | Dataset                                        | Result         | Rank
---------------------|------------------------------------------------|----------------|-----
Classification       | Adult Census (test)                            | Accuracy 77.6  | 28
Fair Classification  | COMPAS (test)                                  | Accuracy 52.5  | 28
Fair Classification  | Synthetic 1.0 (test)                           | Accuracy 70.6  | 28
Classification       | Group-Targeted Label Flipping synthetic (test) | Accuracy 70.6  | 12
Classification       | Synthetic Label Flipping (test)                | Accuracy 69.4  | 12
