
BiasEdit: Debiasing Stereotyped Language Models via Model Editing

About

Previous studies have established that language models manifest stereotyped biases. Existing debiasing strategies, such as retraining a model on counterfactual data, representation projection, and prompting, often fail to efficiently eliminate bias or to directly alter the models' biased internal representations. To address these issues, we propose BiasEdit, an efficient model editing method that removes stereotypical bias from language models through lightweight networks acting as editors to generate parameter updates. BiasEdit employs a debiasing loss that guides the editor networks to make local edits to a subset of a language model's parameters, while a retention loss preserves the model's language modeling abilities during editing. Experiments on StereoSet and CrowS-Pairs demonstrate the effectiveness, efficiency, and robustness of BiasEdit in eliminating bias compared to tangential debiasing baselines, with little to no impact on the language models' general capabilities. In addition, we conduct bias tracing to probe bias in various modules and explore the effects of bias editing on different components of language models.
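The abstract describes a two-part objective: a debiasing loss that equalizes the model's preference between stereotypical and anti-stereotypical continuations, and a retention loss that keeps predictions on bias-unrelated text unchanged. The minimal sketch below illustrates how such an objective could be composed; the function names, loss forms (squared log-probability gap, KL divergence), and weighting are illustrative assumptions, not the paper's actual implementation.

```python
import math

def debiasing_loss(logp_stereo, logp_anti):
    """Mean squared gap between log-probabilities of stereotypical and
    anti-stereotypical sentence pairs; zero when the model is indifferent.
    (Illustrative form, not the paper's exact loss.)"""
    return sum((s - a) ** 2 for s, a in zip(logp_stereo, logp_anti)) / len(logp_stereo)

def log_softmax(logits):
    # Numerically stable log-softmax over a list of logits.
    m = max(logits)
    z = math.log(sum(math.exp(x - m) for x in logits)) + m
    return [x - z for x in logits]

def retention_loss(logits_edited, logits_original):
    """KL(original || edited) over next-token distributions on neutral text:
    penalizes any drift of the edited model away from the original."""
    lp_o = log_softmax(logits_original)
    lp_e = log_softmax(logits_edited)
    return sum(math.exp(o) * (o - e) for o, e in zip(lp_o, lp_e))

def total_loss(logp_stereo, logp_anti, logits_edited, logits_original, lam=1.0):
    """Combined editing objective: debias while retaining language modeling.
    `lam` is a hypothetical trade-off weight."""
    return debiasing_loss(logp_stereo, logp_anti) + lam * retention_loss(
        logits_edited, logits_original
    )
```

In the paper's setting, gradients of such an objective would flow into the lightweight editor networks, which in turn emit updates for only a small, local slice of the language model's parameters rather than retraining the whole model.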

Xin Xu, Wei Xu, Ningyu Zhang, Julian McAuley • 2025

Related benchmarks

Task                Dataset        Metric          Result   Rank
Question Answering  ARC Challenge  Accuracy        59.3     749
Question Answering  ARC Easy       Normalized Acc  85.85    385
Question Answering  OBQA           Accuracy        70       276
Question Answering  COPA           Accuracy        71.18    59
