
Bi-directional Bias Attribution: Debiasing Large Language Models without Modifying Prompts

About

Large language models (LLMs) have demonstrated impressive capabilities across a wide range of natural language processing tasks. However, their outputs often exhibit social biases, raising fairness concerns. Existing debiasing methods, such as fine-tuning on additional datasets or prompt engineering, face scalability issues or compromise the user experience in multi-turn interactions. To address these challenges, we propose a framework for detecting stereotype-inducing words and attributing neuron-level bias in LLMs, without fine-tuning or prompt modification. Our framework first identifies stereotype-inducing adjectives and nouns via comparative analysis across demographic groups. We then attribute biased behavior to specific neurons using two attribution strategies based on integrated gradients. Finally, we mitigate bias by directly intervening on their activations at the projection layer. Experiments on three widely used LLMs demonstrate that our method effectively reduces bias while preserving overall model performance. Code is available at https://github.com/XMUDeepLIT/Bi-directional-Bias-Attribution.
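To make the attribute-then-intervene idea concrete, here is a minimal numpy sketch. It is not the paper's implementation: the "projection layer" is a toy ReLU block, the scalar "bias score" stands in for whatever biased output the method measures, and because that score is linear in the activations, the integrated-gradients path integral collapses to an exact closed form (activation difference times downstream weight). All names (`W_in`, `w_out`, `forward`, `neuron_attribution`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an MLP block: hidden state -> projection-layer
# neurons -> scalar "bias score" (e.g. a logit gap between stereotyped words).
W_in = rng.normal(size=(8, 16))   # up-projection weights (hypothetical)
w_out = rng.normal(size=16)       # read-out to the scalar bias score

def forward(h, mask=None):
    """Return (bias score, projection-layer activations), optionally
    suppressing neurons via a 0/1 mask (the intervention step)."""
    z = np.maximum(h @ W_in, 0.0)  # ReLU neuron activations
    if mask is not None:
        z = z * mask
    return z @ w_out, z

def neuron_attribution(h, baseline):
    """Integrated gradients of the bias score w.r.t. each neuron activation.
    The score is linear in z, so the straight-line path integral is exact:
    IG_j = (z_j(h) - z_j(baseline)) * w_out_j."""
    _, z_h = forward(h)
    _, z_b = forward(baseline)
    return (z_h - z_b) * w_out

h = rng.normal(size=8)             # hidden state for a stereotype-inducing input
baseline = np.zeros(8)             # zero baseline, as is common for IG

attr = neuron_attribution(h, baseline)
score_before, _ = forward(h)

# Intervene: zero out the k neurons most responsible for the bias score.
k = 3
mask = np.ones(16)
mask[np.argsort(attr)[-k:]] = 0.0
score_after, _ = forward(h, mask)
```

Zeroing the top-attributed neurons removes their (positive) contribution, so `score_after` drops below `score_before`; in a real LLM the analogous step would edit activations at the projection layer during the forward pass while leaving the prompt untouched.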

Yujie Lin, Kunquan Li, Yixuan Liao, Xiaoxin Chen, Jinsong Su • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Bias Measurement | StereoSet | – | 25 |
| Occupation Classification | Bias-in-Bio lightweight (test) | Overall Accuracy: 77.32 | 16 |
| Bias Evaluation | BBQ (averaged across gender, nationality, and religion domains) | Accuracy (Ambiguous): 74.46 | 16 |
| Stereotype Fairness Identification | WinoBias cloze-style (test) | P_stereo: 43.18 | 14 |
| Natural Language Inference | Bias-NLI | Pe (Bias-NLI): 0.4 | 8 |
| Stereotype Bias Evaluation | StereoSet (test) | Gender SS: 76.03 | 8 |
| Gender Bias in Coreference Resolution | WinoBias | P(Stereo): 50.76 | 7 |
