
Context-Aware Counterfactual Data Augmentation for Gender Bias Mitigation in Language Models

About

A key challenge in mitigating social bias in fine-tuned language models (LMs) is the potential loss of language modeling capability, which can harm downstream performance. Counterfactual data augmentation (CDA), a widely used fine-tuning method, exemplifies this issue: it may generate synthetic data that aligns poorly with real-world distributions, or produce overly simplistic counterfactuals that ignore the social context of the altered sensitive attributes (e.g., gender) in the pretraining corpus. To address these limitations, we propose a simple yet effective context-augmented CDA method, Context-CDA, which uses large LMs to enhance the diversity and contextual relevance of the debiasing corpus. By augmenting context to minimize discrepancies between the debiasing corpus and the pretraining data, this approach preserves language modeling capability. We then apply uncertainty-based filtering to exclude generated counterfactuals that the target smaller LMs (i.e., the LMs to be debiased) consider low-quality, further improving the quality of the fine-tuning corpus. Experimental results on gender bias benchmarks demonstrate that Context-CDA effectively mitigates bias without sacrificing language modeling performance, while offering insights into social biases through analysis of distribution shifts in next-token generation probabilities.
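To make the two-stage pipeline concrete, here is a minimal, hypothetical sketch of what CDA with context augmentation and uncertainty-based filtering could look like. The gendered word-pair list, the `augment_context` stub, the use of perplexity as the uncertainty measure, and the `max_ppl` threshold are all illustrative assumptions for this sketch, not the paper's actual implementation; only the overall shape (swap, augment with a large LM, filter with the target LM) follows the abstract.

```python
# Hypothetical sketch: (1) build counterfactuals by swapping gendered terms,
# (2) optionally augment them with context, (3) filter them by the target
# (to-be-debiased) LM's uncertainty, approximated here by perplexity.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Step 1: naive counterfactual generation by word-pair swapping.
# Real CDA pipelines also handle casing, morphology, and proper names.
GENDER_PAIRS = {"he": "she", "she": "he", "his": "her", "her": "his",
                "man": "woman", "woman": "man",
                "father": "mother", "mother": "father"}

def make_counterfactual(sentence: str) -> str:
    return " ".join(GENDER_PAIRS.get(w, w) for w in sentence.split())

def augment_context(counterfactual: str) -> str:
    """Placeholder for LLM-based context augmentation: in the paper's
    setting a large LM would extend the counterfactual with socially
    plausible context. This stub just returns the input unchanged."""
    return counterfactual

# Steps 2-3: score each augmented counterfactual with the target LM
# (GPT-2 stands in for the smaller LM to be debiased).
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

@torch.no_grad()
def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss  # mean next-token NLL
    return math.exp(loss.item())

def filter_corpus(sentences, max_ppl=200.0):
    """Keep counterfactuals the target LM does not find too surprising;
    the threshold value is an illustrative assumption."""
    kept = []
    for s in sentences:
        cf = augment_context(make_counterfactual(s))
        if perplexity(cf) <= max_ppl:
            kept.append(cf)
    return kept

print(filter_corpus(["he is a doctor and his work is demanding"]))
```

Perplexity under the target LM is used here as a simple proxy for the paper's uncertainty-based filter: counterfactuals the target model assigns very low probability are the ones most likely to pull fine-tuning away from the pretraining distribution, so they are dropped.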

Shweta Parihar, Liu Guangliang, Natalie Parde, Lu Cheng • 2026

Related benchmarks

Task             | Dataset     | Result            | Rank
Bias Measurement | StereoSet   | Overall SS: 57.75 | 25
Bias Evaluation  | CrowS-Pairs | CS Score: 50.76   | 13
