Towards Resource Efficient and Interpretable Bias Mitigation in Large Language Models

About

Although large language models (LLMs) have demonstrated their effectiveness in a wide range of applications, they have also been observed to perpetuate unwanted biases present in the training data, potentially leading to harm for marginalized communities. In this paper, we mitigate bias by leveraging small biased and anti-biased expert models to obtain a debiasing signal that is added to the LLM output at decoding time. This approach combines computational efficiency (fine-tuning a small model rather than re-training a large one) with interpretability (the probability shift introduced by debiasing can be examined directly). The framework can also be tailored to specific contexts by changing the fine-tuning dataset. Experiments on mitigating gender, race, and religion biases across different architectures show a reduction in bias on several local and global bias metrics while preserving language model performance.

Schrasing Tong, Eliott Zemour, Jessica Lu, Rawisara Lohanimit, Lalana Kagal • 2024
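The decoding-time combination described in the abstract can be sketched roughly as follows. This is a minimal illustration rather than the authors' released code: the expert checkpoint paths, the greedy decoding loop, the combination rule base + alpha * (anti-biased - biased), and the alpha value are all assumptions in the spirit of expert-guided decoding; the paper's exact formulation may differ.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base LLM whose output we want to debias.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
base = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical small expert checkpoints: one fine-tuned on biased text, one on
# anti-biased (counter-stereotypical) text. Paths are placeholders; the experts
# must share the base model's tokenizer so their logits are comparable.
biased = AutoModelForCausalLM.from_pretrained("path/to/biased-expert")
antibiased = AutoModelForCausalLM.from_pretrained("path/to/anti-biased-expert")

@torch.no_grad()
def debiased_generate(prompt, max_new_tokens=30, alpha=1.0):
    """Greedy decoding with an expert-derived debiasing signal.

    At each step, the base model's next-token logits are shifted by
    alpha * (anti-biased expert logits - biased expert logits).
    The exact combination rule used in the paper is an assumption here.
    """
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        z_base = base(ids).logits[:, -1, :]
        z_bias = biased(ids).logits[:, -1, :]
        z_anti = antibiased(ids).logits[:, -1, :]

        # Debiasing signal added to the base LLM output at decoding time.
        z = z_base + alpha * (z_anti - z_bias)

        next_id = z.argmax(dim=-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=-1)
    return tokenizer.decode(ids[0], skip_special_tokens=True)

print(debiased_generate("The nurse said that"))
```

For the interpretability aspect mentioned in the abstract, comparing the softmax of `z` against the softmax of `z_base` at each step exposes the per-token probability shift induced by the debiasing signal, which can be inspected or visualized.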

Related benchmarks

Task                    | Dataset                  | Metric   | Result | Rank
Gender bias evaluation  | RedditBias Gender (test) | Regard   | 3.54   | 9
Language Modeling       | RedditBias Gender (test) | LM Score | 92.84  | 9
Bias Evaluation         | RedditBias Religion      | Regard   | 7.72   | 8
Bias Mitigation         | RedditBias Race          | Regard   | 1.84   | 8
Language Modeling       | RedditBias Religion      | LM Score | 87.49  | 8
