
Fair Feature Distillation for Visual Recognition

About

Fairness is becoming an increasingly crucial issue for computer vision, especially in human-related decision systems. However, achieving algorithmic fairness, which makes a model produce non-discriminative outcomes for protected groups, is still an unresolved problem. In this paper, we devise a systematic approach that reduces algorithmic bias via feature distillation for visual recognition tasks, dubbed MMD-based Fair Distillation (MFD). While distillation has been widely used to improve prediction accuracy, to the best of our knowledge, no prior work has explicitly tried to also improve fairness via distillation. Furthermore, we give a theoretical justification of our MFD regarding the effect of knowledge distillation on fairness. Through extensive experiments, we show that MFD significantly mitigates bias against specific minority groups without any loss of accuracy, on both synthetic and real-world face datasets.
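The core idea described above, distilling a teacher's features into a student while matching feature distributions per protected group via Maximum Mean Discrepancy (MMD), can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`mmd_sq`, `mfd_penalty`), the Gaussian kernel choice, and the NumPy setting are assumptions made for clarity.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian kernel values between rows of a and b.
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2 * sigma ** 2))

def mmd_sq(f_s, f_t, sigma=1.0):
    # Biased estimate of the squared MMD between two feature sets
    # (student features f_s and teacher features f_t).
    return (gaussian_kernel(f_s, f_s, sigma).mean()
            + gaussian_kernel(f_t, f_t, sigma).mean()
            - 2 * gaussian_kernel(f_s, f_t, sigma).mean())

def mfd_penalty(feat_s, feat_t, groups, sigma=1.0):
    # Hypothetical fairness-aware distillation penalty: sum the
    # group-conditional MMDs so the student's features match the
    # teacher's within every protected group, not just on average.
    return sum(mmd_sq(feat_s[groups == g], feat_t[groups == g], sigma)
               for g in np.unique(groups))
```

In training, a penalty of this form would be added to the usual classification loss; because it is computed separately per protected group, minimizing it discourages the student from encoding group-specific biases that a global distillation loss could leave intact.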

Sangwon Jung, Donggyu Lee, Taeeon Park, Taesup Moon • 2021

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Facial Attribute Classification | CelebA | Accuracy: 80 | 163 |
| Classification | CelebA | Avg Accuracy: 80 | 137 |
| Classification | CelebA (test) | Average Accuracy: 78 | 92 |
| Facial Attribute Classification | CelebA (test) | Average Acc: 80.15 | 89 |
| Image Classification | CIFAR-10S (test) | Accuracy: 82.77 | 17 |
| Age Classification | UTKFace (test) | Accuracy: 74.69 | 12 |
| Facial Attribute Classification | CelebA | EO (T=a/S=m): 7.4 | 8 |
