
Optimal Transport-Based Token Weighting scheme for Enhanced Preference Optimization

About

Direct Preference Optimization (DPO) has emerged as a promising framework for aligning Large Language Models (LLMs) with human preferences by directly optimizing the log-likelihood difference between chosen and rejected responses. However, existing methods assign equal importance to all tokens in the response, while humans focus on more meaningful parts. This leads to suboptimal preference optimization, as irrelevant or noisy tokens disproportionately influence the DPO loss. To address this limitation, we propose an Optimal Transport-based token weighting scheme for enhancing direct Preference Optimization (OTPO). By emphasizing semantically meaningful token pairs and de-emphasizing less relevant ones, our method introduces a context-aware token weighting scheme that yields a more contrastive reward difference estimate. This adaptive weighting enhances reward stability, improves interpretability, and ensures that preference optimization focuses on meaningful differences between responses. Extensive experiments have validated OTPO's effectiveness in improving instruction-following ability across various settings. (Code is available at https://github.com/Mimasss2/OTPO.)
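The abstract describes weighting token pairs between the chosen and rejected responses via optimal transport so that semantically matched tokens dominate the reward-difference estimate. The sketch below is a minimal illustration of that idea, not the paper's exact formulation (see the linked repository for that): it computes an entropic-regularized transport plan with Sinkhorn iterations over a hypothetical token-pair cost matrix (e.g. one minus cosine similarity of token embeddings), then uses the plan's marginals to reweight per-token log-likelihood ratios in the DPO margin. All function names and the marginal-weighting choice are assumptions for illustration.

```python
import numpy as np

def sinkhorn(cost, reg=0.1, n_iters=50):
    """Entropic-regularized optimal transport via Sinkhorn iterations.

    cost: (m, n) cost matrix between chosen/rejected tokens.
    Returns an (m, n) transport plan with uniform marginals 1/m and 1/n.
    """
    K = np.exp(-cost / reg)                 # Gibbs kernel
    m, n = cost.shape
    a, b = np.full(m, 1.0 / m), np.full(n, 1.0 / n)
    u = np.ones(m)
    for _ in range(n_iters):                # alternate marginal scaling
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

def weighted_dpo_margin(logr_chosen, logr_rejected, cost, reg=0.1):
    """Token-weighted reward-difference estimate (illustrative).

    logr_chosen:  per-token log-likelihood ratios (policy minus reference)
                  for the chosen response, shape (m,).
    logr_rejected: same for the rejected response, shape (n,).
    cost: (m, n) token-pair cost, e.g. 1 - cosine(emb_i, emb_j).
    """
    plan = sinkhorn(cost, reg)
    # Marginals of the plan, rescaled so weights average to 1:
    # tokens involved in low-cost (semantically matched) pairs get more mass.
    w_c = plan.sum(axis=1) * len(logr_chosen)
    w_r = plan.sum(axis=0) * len(logr_rejected)
    return w_c @ logr_chosen - w_r @ logr_rejected
```

With a constant cost matrix the plan is uniform, every weight is 1, and the margin reduces to the standard (unweighted) DPO log-likelihood difference; a non-uniform cost shifts mass toward matched token pairs.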

Meng Li, Guangda Huzhang, Haibo Zhang, Xiting Wang, Anxiang Zeng • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| LLM Alignment Evaluation | AlpacaEval 2.0 (test) | LC Win Rate 30.35 | 51 |
| Instruction Following | AlpacaEval UltraFeedback 2 (test) | LC Win Rate 53.37 | 12 |
| Instruction Following | AlpacaEval Helpsteer2 2 (test) | LC Win Rate 29.64 | 12 |
| Human Evaluation | UltraFeedback 50 sampled questions | Win Rate (Expert 1) 62 | 5 |
