
When Weak LLMs Speak with Confidence, Preference Alignment Gets Stronger

About

Preference alignment is an essential step in adapting large language models (LLMs) to human values, but existing approaches typically depend on costly human annotations or large-scale API-based models. We explore whether a weak LLM can instead act as an effective annotator. Surprisingly, we find that selecting only a subset of a weak LLM's highly confident samples leads to substantially better performance than using full human annotations. Building on this insight, we propose Confidence-Weighted Preference Optimization (CW-PO), a general framework that re-weights training samples by a weak LLM's confidence and can be applied across different preference optimization objectives. Notably, the model aligned by CW-PO with just 20% of human annotations outperforms the model trained with 100% of annotations under standard DPO. These results suggest that weak LLMs, when paired with confidence weighting, can dramatically reduce the cost of preference alignment while even outperforming methods trained on fully human-labeled data.
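The core idea in the abstract is to scale each training sample's loss by the weak annotator's confidence. Below is a minimal, illustrative sketch of that idea applied to the standard DPO objective; the field names, the confidence-normalized averaging, and the exact weighting scheme are assumptions for illustration, not the paper's implementation.

```python
import math

def sigmoid(z):
    # Numerically plain logistic function; fine for an illustrative sketch.
    return 1.0 / (1.0 + math.exp(-z))

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Per-sample DPO loss: -log sigmoid(beta * (margin_chosen - margin_rejected)),
    where each margin is the policy log-prob minus the reference log-prob."""
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    return -math.log(sigmoid(margin))

def cw_po_loss(batch, beta=0.1):
    """Confidence-weighted preference loss (sketch): each sample's DPO loss
    is scaled by the weak LLM's confidence in its own preference label,
    then averaged over the total confidence mass.
    `batch` is a list of dicts; the keys used here are hypothetical."""
    total, weight_sum = 0.0, 0.0
    for s in batch:
        w = s["confidence"]  # weak-LLM confidence in [0, 1] (assumed field)
        total += w * dpo_loss(s["logp_chosen"], s["logp_rejected"],
                              s["ref_chosen"], s["ref_rejected"], beta)
        weight_sum += w
    return total / weight_sum
```

With `confidence = 1.0` for every sample this reduces to plain averaged DPO; samples the weak annotator is unsure about contribute proportionally less to the gradient, which is the mechanism the abstract credits for matching full human annotation with only 20% of the labels.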

Amirabbas Afzali, Myeongho Jeon, Maria Brbic • 2026

Related benchmarks

Task                  Dataset         Metric     Result  Rank
Summarization         TL;DR           Win Rate   86.5    42
Preference Alignment  HH-RLHF (test)  Win Rate   87.4    36
Preference Alignment  TL;DR (test)    Win Rate   68.8    36
Preference Alignment  HH-RLHF         --         --      31
Preference Alignment  UFB (test)      Win Rate   81.05   18
Preference Alignment  UFB             Win Rate   83.2    18
Preference Alignment  HARMLESS        GRA (%)    72.9    4
Preference Alignment  HELPFUL         GRA (%)    72.7    4
Preference Alignment  TL;DR           GRA (%)    64.4    4
Preference Alignment  AVG             GRA (%)    70.6    4
