
FairGBM: Gradient Boosting with Fairness Constraints

About

Tabular data is prevalent in many high-stakes domains, such as financial services or public policy. Gradient Boosted Decision Trees (GBDT) are popular in these settings due to their scalability, performance, and low training cost. While fairness in these domains is a foremost concern, existing in-processing Fair ML methods are either incompatible with GBDT, or incur significant performance losses while taking considerably longer to train. We present FairGBM, a dual ascent learning framework for training GBDT under fairness constraints, with little to no impact on predictive performance when compared to unconstrained GBDT. Since observational fairness metrics are non-differentiable, we propose smooth convex error-rate proxies for common fairness criteria, enabling gradient-based optimization using a "proxy-Lagrangian" formulation. Our implementation shows an order-of-magnitude speedup in training time relative to related work, a pivotal aspect for fostering the widespread adoption of FairGBM by real-world practitioners.
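The core idea sketched below is the paper's dual-ascent recipe in miniature: the primal step descends on the training loss plus a Lagrange-multiplier-weighted, *smooth* (sigmoid-based) proxy of the fairness constraint, while the dual step ascends the multiplier on the *actual* non-differentiable constraint violation. This is a hedged illustration, not FairGBM itself: a linear logistic model stands in for the GBDT learner, the data is synthetic, and all names (`proxy_fpr_grad`, `train`, the FPR-gap constraint with slack `eps`) are choices made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

# Synthetic data in which group 0's negatives look "positive",
# so an unconstrained model gives group 0 a much higher FPR.
n = 600
g = rng.integers(0, 2, n)            # protected group membership
y = rng.integers(0, 2, n)            # label, independent of group
x1 = y + 0.8 * ((g == 0) & (y == 0)) + 0.3 * rng.standard_normal(n)
X = np.column_stack([x1, (g == 0).astype(float), np.ones(n)])

def hard_fpr(scores, g_val):
    """Actual (non-differentiable) false-positive rate of one group."""
    m = (g == g_val) & (y == 0)
    return float((scores[m] > 0).mean())

def proxy_fpr_grad(w, g_val):
    """Smooth sigmoid proxy of the group FPR, and its gradient in w."""
    m = (g == g_val) & (y == 0)
    p = sigmoid(X[m] @ w)
    return p.mean(), (p * (1 - p)) @ X[m] / m.sum()

def train(constrained, steps=2000, lr=0.5, lr_dual=0.2, eps=0.02):
    w, lam = np.zeros(3), 0.0        # primal weights, Lagrange multiplier
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / n     # cross-entropy gradient (primal loss)
        if constrained:
            f0, g0 = proxy_fpr_grad(w, 0)
            f1, g1 = proxy_fpr_grad(w, 1)
            # primal descent uses the differentiable proxy constraint
            grad = grad + lam * np.sign(f0 - f1) * (g0 - g1)
        w -= lr * grad
        if constrained:
            # dual ascent uses the actual constraint violation
            s = X @ w
            viol = abs(hard_fpr(s, 0) - hard_fpr(s, 1)) - eps
            lam = max(0.0, lam + lr_dual * viol)
    return w

def gap(w):
    return abs(hard_fpr(X @ w, 0) - hard_fpr(X @ w, 1))

w_plain = train(constrained=False)
w_fair = train(constrained=True)
print(f"FPR gap unconstrained: {gap(w_plain):.3f}, constrained: {gap(w_fair):.3f}")
```

The asymmetry is the point: the proxy is only used where gradients are needed (the primal step), while the multiplier update can use the true step-function error rates, so the constraint being enforced is the real one. FairGBM applies the same scheme with GBDT functional gradients in place of the linear-model gradient above.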

André F. Cruz, Catarina Belém, Sérgio Jesus, João Bravo, Pedro Saleiro, Pedro Bizarro • 2022

Related benchmarks

| Task                    | Dataset         | Metric                  | Result | Rank |
|-------------------------|-----------------|-------------------------|--------|------|
| Binary Classification   | Income (test)   | Test Accuracy           | 86.08  | 34   |
| Fairness evaluation     | ppvr (test)     | PP                      | 53.24  | 14   |
| Fairness Classification | ppvr (test)     | Demographic Parity      | 0.0357 | 14   |
| Classification          | PPR race (test) | F1 Score                | 61.77  | 14   |
| Fairness Classification | ppr (test)      | Demographic Parity      | 8.53   | 14   |
| Fairness evaluation     | Income (test)   | PP                      | 2.74   | 14   |
| Classification          | Income (test)   | Equality of Opportunity | 7.6    | 14   |
| Classification          | Credit (test)   | EOpp                    | 0.0579 | 14   |
| Fairness Classification | Credit (test)   | Disparate Impact (DP)   | 0.113  | 14   |
| Fairness evaluation     | ppr (test)      | PP                      | 9.3    | 14   |
Showing 10 of 27 rows
