
BiasGuard: A Reasoning-enhanced Bias Detection Tool For Large Language Models

About

Identifying bias in LLM-generated content is a crucial prerequisite for ensuring fairness in LLMs. Existing methods, such as fairness classifiers and LLM-based judges, face two key limitations: difficulty in understanding underlying intentions and a lack of explicit criteria for fairness judgment. In this paper, we introduce BiasGuard, a novel bias detection tool that explicitly analyzes inputs and reasons through fairness specifications to provide accurate judgments. BiasGuard is implemented through a two-stage approach: the first stage initializes the model to reason explicitly over fairness specifications, while the second stage leverages reinforcement learning to enhance its reasoning and judgment capabilities. Our experiments, conducted across five datasets, demonstrate that BiasGuard outperforms existing tools, improving accuracy and reducing over-fairness misjudgments. We also highlight the importance of reasoning-enhanced decision-making and provide evidence for the effectiveness of our two-stage optimization pipeline.

Zhiting Fan, Ruizhe Chen, Zuozhu Liu • 2025

Related benchmarks

Task            Dataset                     Metric     Result   Rank
Bias Detection  Implicit Toxicity (test)    Accuracy   81       12
Bias Detection  The Gab Hate (GHC) (test)   Accuracy   71.25    12
Bias Detection  RedditBias (test)           Accuracy   79.3     12
Bias Detection  ToxiGen (test)              Accuracy   73.15    12
Bias Detection  SBIC (test)                 Accuracy   74       12
