
Multiple-Debias: A Full-process Debiasing Method for Multilingual Pre-trained Language Models

About

Multilingual Pre-trained Language Models (MPLMs) have become essential tools for natural language processing. However, they often exhibit biases related to sensitive attributes such as gender, race, and religion. In this paper, we introduce a comprehensive multilingual debiasing method named Multiple-Debias to address these issues across multiple languages. By incorporating multilingual counterfactual data augmentation and multilingual Self-Debias across both the pre-processing and post-processing stages, alongside parameter-efficient fine-tuning, our approach significantly reduces biases in MPLMs across three sensitive attributes in four languages. We also extended CrowS-Pairs to German, Spanish, Chinese, and Japanese, validating our full-process multilingual debiasing method for gender, racial, and religious bias. Our experiments show that (i) multilingual debiasing methods surpass monolingual approaches in effectively mitigating biases, and (ii) integrating debiasing information from different languages notably improves the fairness of MPLMs.
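The counterfactual data augmentation (CDA) step mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's actual implementation: the `SWAPS` dictionary, the `augment` function, and the tiny word lists are all hypothetical, and real CDA pipelines use much larger attribute-term lists and handle casing and morphology per language.

```python
# Minimal sketch of gender counterfactual data augmentation (CDA):
# each training sentence is paired with a copy in which gendered
# terms are swapped, so the model sees both variants equally often.
# SWAPS and augment() are illustrative, not the paper's word lists.
SWAPS = {
    "en": {"he": "she", "she": "he", "his": "her", "her": "his"},
    "de": {"er": "sie", "sie": "er"},  # toy German pronoun swap
}

def augment(sentence, lang):
    """Return the counterfactual sentence with gendered terms swapped."""
    table = SWAPS[lang]
    words = sentence.split()
    swapped = [table.get(w.lower(), w) for w in words]
    return " ".join(swapped)

print(augment("he lost his keys", "en"))  # -> "she lost her keys"
```

Training on the union of original and swapped sentences is what makes the augmentation "counterfactual": the attribute words vary while the rest of the context is held fixed.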

Haoyu Liang, Peijian Zeng, Wentao Huang, Aimin Yang, Dong Zhou• 2026

Related benchmarks

Task                       Dataset                                                Result                  Rank
Gender Bias Mitigation     Multilingual CrowS-Pairs gender-sensitive attributes   Bias Score (DE): 0.83   18
Religious Bias Evaluation  Multilingual CrowS-Pairs (test)                        Bias Score (DE): 4.17   18
Racial Bias Evaluation     Multilingual CrowS-Pairs racial bias                   Bias Score (DE): 11.41  18
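The bias scores in the table can be read against the CrowS-Pairs convention, where a model is probed with stereotypical/anti-stereotypical sentence pairs and an unbiased model should prefer each side 50% of the time. A sketch, assuming the reported score is the absolute deviation of the stereotype-preference rate from that ideal 50% (the `bias_score` helper and the example log-likelihoods are hypothetical, not the paper's code or data):

```python
# CrowS-Pairs-style bias score: percentage of pairs where the model
# assigns higher (pseudo-)log-likelihood to the stereotypical sentence,
# reported as absolute deviation from the ideal 50%. Hypothetical sketch.
def bias_score(pair_scores):
    """pair_scores: list of (stereo_ll, antistereo_ll) log-likelihoods."""
    prefer_stereo = sum(1 for s, a in pair_scores if s > a)
    pct = 100.0 * prefer_stereo / len(pair_scores)
    return abs(pct - 50.0)

# Toy example: 2 of 4 pairs prefer the stereotype -> 50% -> deviation 0.0
pairs = [(-10.2, -11.5), (-9.8, -9.1), (-12.0, -12.4), (-8.7, -8.2)]
print(bias_score(pairs))  # -> 0.0
```

Under this reading, lower is better, so the table suggests gender debiasing (0.83) was most effective in German, with residual racial bias (11.41) the largest.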
