
RobustDebias: Debiasing Language Models using Distributionally Robust Optimization

About

Pretrained language models have been shown to exhibit biases and social stereotypes. Prior work on debiasing these models has largely focused on modifying embedding spaces during pretraining, which is not scalable for large models. Fine-tuning pretrained models on task-specific datasets can both degrade model performance and amplify biases present in the fine-tuning data. We address bias amplification during fine-tuning rather than during costly pretraining, focusing on BERT models due to their widespread use in language understanding tasks. While Empirical Risk Minimization effectively optimizes downstream performance, it often amplifies social biases during fine-tuning. To counter this, we propose RobustDebias, a novel mechanism that adapts Distributionally Robust Optimization (DRO) to debias language models during fine-tuning. Our approach debiases models across multiple demographics during MLM fine-tuning and generalizes to any dataset or task. Extensive experiments on various language models show significant bias mitigation with minimal performance impact.
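The core contrast the abstract draws — ERM averaging loss over all examples versus DRO optimizing for the worst-off demographic group — can be illustrated with a minimal sketch. This is a generic group-DRO objective, not the paper's exact formulation; the group names and loss values below are purely illustrative.

```python
# Sketch: ERM averages loss over all examples, so a majority group can
# dominate; group DRO instead minimizes the worst group's average loss,
# so no demographic group's loss is traded away to improve the mean.
# Group names and loss values are hypothetical.

def erm_loss(losses_per_example):
    """Empirical Risk Minimization: mean loss over all examples."""
    return sum(losses_per_example) / len(losses_per_example)

def group_dro_loss(losses_by_group):
    """Group DRO objective: the worst (maximum) per-group mean loss."""
    group_means = [sum(g) / len(g) for g in losses_by_group.values()]
    return max(group_means)

# Two illustrative demographic groups with unequal losses.
losses = {"group_a": [0.2, 0.4], "group_b": [1.0, 1.2]}
all_losses = [l for group in losses.values() for l in group]

print(erm_loss(all_losses))    # 0.7 -- low average hides group_b's losses
print(group_dro_loss(losses))  # 1.1 -- DRO surfaces the worst group
```

Minimizing the DRO objective pushes the optimizer to improve the worst group first, which is why this family of objectives is used to limit bias amplification during fine-tuning.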

Deep Gandhi, Katyani Singh, Nidhi Hegde • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Bias Evaluation | CrowS-Pairs | - | 13 |
| Language Model Debiasing | StereoSet (test) | LMS Score 0.8535 | 5 |
| Language Model Debiasing | SEAT | Gender Bias Score 0.25 | 5 |
