Improving Gender Fairness of Pre-Trained Language Models without Catastrophic Forgetting
About
Existing studies that address gender bias in pre-trained language models usually build a small gender-neutral data set and conduct second-phase pre-training on the model with that data. However, given the limited size and concentrated focus of the gender-neutral data, catastrophic forgetting can occur during second-phase pre-training: forgetting information from the original training data may damage the model's downstream performance by a large margin. In this work, we empirically show that catastrophic forgetting occurs in such methods by evaluating them on general NLP tasks in GLUE. We then propose a new method, GEnder Equality Prompt (GEEP), to improve the gender fairness of pre-trained models with less forgetting. GEEP freezes the pre-trained model and learns gender-related prompts with gender-neutral data. Empirical results show that GEEP not only achieves state-of-the-art performance on gender fairness tasks, but also forgets less and outperforms baselines on GLUE by a large margin.
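GEEP's core mechanism (freeze all pre-trained weights and learn only new prompt embeddings from the gender-neutral data) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the array shapes, names, and the single hypothetical update step are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM, N_PROMPTS = 100, 8, 4  # toy sizes, chosen for illustration

# Frozen pre-trained word embeddings (stand-in for the full frozen model).
frozen_embeddings = rng.normal(size=(VOCAB, DIM))
frozen_embeddings.setflags(write=False)  # never updated during debiasing

# Trainable prompt embeddings for gender-related tokens, the only
# parameters learned from the gender-neutral data.
prompt_embeddings = rng.normal(size=(N_PROMPTS, DIM)) * 0.01

def embed(token_ids, prompt_ids):
    """Prepend the trainable prompt embeddings to frozen token embeddings."""
    prompts = prompt_embeddings[prompt_ids]
    tokens = frozen_embeddings[token_ids]
    return np.concatenate([prompts, tokens], axis=0)

# One hypothetical gradient step: only the prompts move, so the original
# model's knowledge cannot be overwritten (hence less forgetting).
grad = rng.normal(size=prompt_embeddings.shape)  # placeholder gradient
prompt_embeddings -= 0.1 * grad  # frozen_embeddings stays untouched
```

Because the pre-trained weights are read-only, second-phase training on the small gender-neutral corpus cannot overwrite what the model learned originally, which is the intuition behind GEEP's reduced forgetting.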
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Natural Language Understanding | GLUE | SST-2: 92.4 | 452 |
| Coreference Resolution | WSC | Accuracy: 50.5 | 96 |
| Pronoun Disambiguation | WSC (test) | -- | 14 |
| Coreference Resolution | Winogender | Accuracy: 62.9 | 3 |
| Coreference Resolution | DPR WSCR | Accuracy: 52.8 | 3 |
| Pronoun Coreference Resolution | Winogender (test) | Accuracy: 64.5 | 3 |
| Pronoun Coreference Resolution | DPR WSCR (test) | Accuracy: 53.6 | 3 |
| Natural Language Understanding | GLUE 2018 (test dev) | MNLI: 87.7 | 3 |