A Statistical Framework for Alignment with Biased AI Feedback

About

Modern alignment pipelines increasingly replace expensive human preference labels with evaluations from large language models (LLM-as-Judge). However, AI labels can be systematically biased relative to high-quality human feedback. In this paper, we develop two debiased alignment methods within a general framework that accommodates heterogeneous prompt-response distributions and external human feedback sources. Debiased Direct Preference Optimization (DDPO) augments standard DPO with a residual-based correction and density-ratio reweighting to mitigate systematic bias, while retaining DPO's computational efficiency. Debiased Identity Preference Optimization (DIPO) directly estimates human preference probabilities without imposing a parametric reward model. We provide theoretical guarantees for both methods: DDPO offers a practical and computationally efficient solution for large-scale alignment, whereas DIPO serves as a robust, statistically optimal alternative that attains the semiparametric efficiency bound. Empirical studies on sentiment generation, summarization, and single-turn dialogue demonstrate that the proposed methods substantially improve alignment efficiency and recover performance close to that of an oracle trained on fully human-labeled data.
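For context, standard DPO minimizes −log σ(β[(log π(y_w|x) − log π_ref(y_w|x)) − (log π(y_l|x) − log π_ref(y_l|x))]) over preference pairs. The following PyTorch sketch illustrates how the two debiasing components described above could plug into such a loss. It is a minimal illustration, not the paper's DDPO estimator: the function name, tensor arguments, and the soft-label form are assumptions, and it presumes the density ratios w(x) and residual corrections r(x) have already been estimated (e.g., the residuals fit on a small human-labeled split).

```python
# Illustrative sketch only; the exact DDPO estimator is defined in the paper.
# All argument names here are hypothetical.
import torch
import torch.nn.functional as F

def debiased_dpo_loss(policy_chosen_logps, policy_rejected_logps,
                      ref_chosen_logps, ref_rejected_logps,
                      ai_pref_probs, residuals, density_ratios, beta=0.1):
    """Soft-label DPO-style loss with residual correction and reweighting.

    ai_pref_probs:  P_AI(chosen > rejected) from the LLM judge, in [0, 1].
    residuals:      estimated human-minus-AI preference gap r(x).
    density_ratios: importance weights w(x) toward the target prompt mix.
    """
    # Implied preference logit, as in standard DPO.
    logits = beta * ((policy_chosen_logps - ref_chosen_logps)
                     - (policy_rejected_logps - ref_rejected_logps))
    # Residual-corrected soft targets approximating human preference probs.
    targets = torch.clamp(ai_pref_probs + residuals, 0.0, 1.0)
    # Per-example binary cross-entropy against the soft targets.
    per_example = F.binary_cross_entropy_with_logits(
        logits, targets, reduction="none")
    # Density-ratio reweighting toward the human-feedback distribution.
    return (density_ratios * per_example).mean()
```

Setting residuals to zero and density_ratios to one recovers a soft-label variant of the standard DPO objective, which makes the two debiasing components easy to ablate in this sketch.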

Xintao Xia, Zhiqiu Xia, Linjun Zhang, Zhanrui Cai • 2026

Related benchmarks

Task                 | Dataset                         | Metric                   | Result | Rank
Dialogue Generation  | Anthropic HH (test)             | Average Preference Score | 68.72  | 16
Single-turn Dialogue | Anthropic-HH                    | Win Rate                 | 61.34  | 12
Summarization        | Summarization benchmark dataset | Win Rate                 | 77.55  | 12
Sentiment Generation | IMDB                            | Average Score            | 0.9748 | 10
Summarization        | Benchmark dataset               | Win Rate                 | 78.82  | 8
