Mitigating Selection Bias in Large Language Models via Permutation-Aware GRPO
About
Large language models (LLMs) used for multiple-choice and pairwise evaluation tasks often exhibit selection bias driven by non-semantic factors such as option position and label symbols. Existing inference-time debiasing methods are costly and can harm reasoning, while pointwise training ignores the constraint that the same question should yield consistent answers across option permutations. To address this, we propose Permutation-Aware Group Relative Policy Optimization (PA-GRPO), which mitigates selection bias by enforcing permutation-consistent semantic reasoning. PA-GRPO constructs a permutation group for each instance by generating multiple candidate permutations and optimizes the model with two complementary mechanisms: (1) a cross-permutation advantage, which computes advantages relative to the mean reward over all permutations of the same instance, and (2) a consistency-aware reward, which encourages the model to produce consistent decisions across different permutations. Experimental results demonstrate that PA-GRPO outperforms strong baselines across seven benchmarks, substantially reducing selection bias while maintaining strong overall performance. The code will be made available on GitHub (https://github.com/ECNU-Text-Computing/PA-GRPO).
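To make the two mechanisms concrete, below is a minimal NumPy sketch, not the paper's implementation: the function name `pa_grpo_advantages`, the `consistency_weight` parameter, and the majority-vote form of the consistency reward are illustrative assumptions. It shapes each rollout's reward with a consistency bonus, then normalizes against the mean over the entire permutation group rather than per permutation.

```python
import numpy as np

def pa_grpo_advantages(rewards, decisions, consistency_weight=0.5):
    """Illustrative sketch of PA-GRPO's two mechanisms for one instance.

    rewards:   (P, G) base rewards for G rollouts on each of P permutations.
    decisions: (P, G) semantic answer chosen by each rollout, mapped back
               to the original (unpermuted) option identity.
    """
    rewards = np.asarray(rewards, dtype=float)
    decisions = np.asarray(decisions)

    # (2) Consistency-aware reward (assumed form): bonus for rollouts whose
    # decision matches the majority decision across the whole permutation
    # group of this instance.
    values, counts = np.unique(decisions, return_counts=True)
    majority = values[np.argmax(counts)]
    consistency_bonus = consistency_weight * (decisions == majority).astype(float)
    shaped = rewards + consistency_bonus

    # (1) Cross-permutation advantage: the baseline is the mean (and std)
    # over ALL rollouts of ALL permutations of the instance, not the
    # per-permutation statistics used by vanilla GRPO.
    baseline = shaped.mean()
    scale = shaped.std() + 1e-8
    return (shaped - baseline) / scale

# Example: 2 permutations x 3 rollouts of one question; decisions are the
# semantic options each rollout picked, mapped back to the original ordering.
rewards = [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
decisions = [["B", "A", "B"], ["B", "B", "C"]]
adv = pa_grpo_advantages(rewards, decisions)
```

Normalizing over the full permutation group ties together rollouts of the same underlying question, so a permutation the model answers inconsistently receives a negative advantage even when its within-permutation rewards look acceptable.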
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| LLM-as-a-Judge | JudgeBench | Accuracy | 60.1 | 29 |
| Multiple-Choice Questions | ARC Challenge | Accuracy | 96 | 24 |
| Multiple-Choice Questions | GPQA | Accuracy | 54.1 | 24 |
| LLM-as-a-Judge | MT-Bench | Accuracy | 81.4 | 21 |
| LLM-as-a-Judge | PreferenceBench | Accuracy | 90.2 | 21 |
| Multiple-Choice Questions | TinyMMLU | Accuracy | 86.8 | 21 |