
Improving Gender Fairness of Pre-Trained Language Models without Catastrophic Forgetting

About

Existing studies addressing the gender bias of pre-trained language models usually build a small gender-neutral data set and conduct a second phase of pre-training on the model with that data. However, given the limited size and concentrated focus of the gender-neutral data, catastrophic forgetting can occur during this second-phase pre-training. Forgetting information from the original training data can damage the model's downstream performance by a large margin. In this work, we empirically show that catastrophic forgetting occurs in such methods by evaluating them on general NLP tasks in GLUE. We then propose a new method, GEnder Equality Prompt (GEEP), to improve the gender fairness of pre-trained models with less forgetting. GEEP freezes the pre-trained model and learns gender-related prompts with gender-neutral data. Empirical results show that GEEP not only achieves state-of-the-art performance on gender fairness tasks, but also forgets less and performs better on GLUE by a large margin.
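The core mechanism described above — freezing every pre-trained weight and training only newly added prompt embeddings — can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: the class name, the toy backbone, and hyperparameters such as the number of prompts are assumptions for demonstration.

```python
# Minimal sketch of prompt tuning with a frozen backbone, as GEEP describes:
# only the new prompt embeddings receive gradients, so the original
# pre-trained knowledge cannot be overwritten (less forgetting).
# `PromptedEncoder` and all hyperparameters here are illustrative.
import torch
import torch.nn as nn

class PromptedEncoder(nn.Module):
    def __init__(self, backbone: nn.Module, embed_dim: int, n_prompts: int = 4):
        super().__init__()
        self.backbone = backbone
        # Freeze every pre-trained parameter.
        for p in self.backbone.parameters():
            p.requires_grad = False
        # Gender-equality prompts: the only trainable parameters.
        self.prompts = nn.Parameter(torch.randn(n_prompts, embed_dim) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # Prepend the learned prompt embeddings to every sequence in the batch.
        batch = token_embeds.size(0)
        p = self.prompts.unsqueeze(0).expand(batch, -1, -1)
        return self.backbone(torch.cat([p, token_embeds], dim=1))

# Toy stand-in for a pre-trained encoder: one linear layer applied per position.
backbone = nn.Linear(16, 16)
model = PromptedEncoder(backbone, embed_dim=16, n_prompts=4)
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)  # only the prompt embeddings would be updated during training
```

During second-phase training on gender-neutral data, an optimizer would be given only `model.prompts`, so gradient updates never touch the backbone.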

Zahra Fatemi, Chen Xing, Wenhao Liu, Caiming Xiong • 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
Natural Language Understanding | GLUE | SST-2 | 92.4 | 452
Coreference Resolution | WSC | Accuracy | 50.5 | 96
Pronoun Disambiguation | WSC (test) | -- | -- | 14
Coreference Resolution | Winogender | Accuracy | 62.9 | 3
Coreference Resolution | DPR WSCR | Accuracy | 52.8 | 3
Pronoun Coreference Resolution | Winogender (test) | Accuracy | 64.5 | 3
Pronoun Coreference Resolution | DPR WSCR (test) | Accuracy | 53.6 | 3
Natural Language Understanding | GLUE 2018 (test dev) | MNLI | 87.7 | 3

Other info

Code
