
Token-Importance Guided Direct Preference Optimization

About

Aligning Large Language Models (LLMs) with human preferences is crucial for safe and effective AI interactions. While popular methods like Direct Preference Optimization (DPO) have simplified alignment, they remain sensitive to data noise and overlook the differential importance of individual tokens. Existing token-level approaches often rely on probability prediction or simplistic weighting schemes to obtain token importance, which still cannot fully address these issues. To address these limitations, we propose Token-Importance Guided Direct Preference Optimization (TI-DPO), a framework that achieves fine-grained semantic control through two synergistic innovations. First, we propose a novel hybrid weighting mechanism that combines gradient attribution with a Gaussian prior, ensuring both the accuracy and robustness of token-importance scores. Second, we employ a triplet loss to provide structured guidance for the optimization, explicitly pushing model outputs toward preferred responses and away from non-preferred ones. Experimental results show that TI-DPO achieves higher accuracy and stronger generative diversity, providing a more stable and computationally efficient solution than DPO and other RLHF methods.
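To make the two ingredients concrete, here is a minimal sketch of what a hybrid token-weighting scheme and a triplet-style preference loss could look like. The abstract does not give the paper's exact formulas, so everything below is an illustrative assumption: the blend coefficient `alpha`, the Gaussian prior centered mid-sequence, and the hinge form of the triplet objective are all guesses, and the function names are hypothetical.

```python
import math

def token_weights(grad_attr, sigma=3.0, alpha=0.5):
    """Hybrid token importance: blend normalized gradient-attribution
    magnitudes with a Gaussian positional prior (illustrative choice of
    prior; the paper's actual prior may differ)."""
    g = [abs(x) for x in grad_attr]
    total = sum(g) + 1e-8
    g = [x / total for x in g]                      # normalize attributions
    n = len(g)
    mid = (n - 1) / 2.0
    prior = [math.exp(-0.5 * ((i - mid) / sigma) ** 2) for i in range(n)]
    ps = sum(prior)
    prior = [p / ps for p in prior]                 # normalize Gaussian prior
    w = [alpha * gi + (1.0 - alpha) * pi for gi, pi in zip(g, prior)]
    ws = sum(w)
    return [x / ws for x in w]                      # final weights sum to 1

def weighted_margin_loss(logr_pref, logr_rej, w_pref, w_rej, margin=1.0):
    """Triplet-style hinge on token-weighted scores: reward the preferred
    response's weighted per-token log-ratios (policy vs. reference) exceeding
    the rejected response's by at least `margin`."""
    s_pref = sum(w * r for w, r in zip(w_pref, logr_pref))
    s_rej = sum(w * r for w, r in zip(w_rej, logr_rej))
    return max(0.0, margin - s_pref + s_rej)
```

For example, with attributions `[0.1, 2.0, 0.1, 0.1]` the second token receives the largest weight, and a preferred response whose weighted score already exceeds the rejected one by the margin incurs zero loss. In practice the attributions would come from gradients of the preference objective with respect to token embeddings, and the loss would be averaged over a batch of (prompt, preferred, rejected) triples.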

Ning Yang, Hai Lin, Yibo Liu, Baoliang Tian, Guoqing Liu, Haijun Zhang• 2025

Related benchmarks

Task | Dataset | Result | Rank
Code Generation | HumanEval | - | 1036
Instruction Following | IFEval | IFEval Accuracy: 86 | 625
Mathematical Reasoning | GSM8K | Math Score: 81 | 197
Graduate-level Question Answering | GPQA | Accuracy: 34.5 | 184
Code Generation | HumanEval | Pass@1: 68 | 171
Multi-task Language Understanding | MMLU | MMLU Score: 74 | 112
Multi-task Language Understanding | MMLU | Accuracy: 68 | 111
Truthfulness | TruthfulQA | Truthfulness Accuracy: 57 | 86
Question Answering | TruthfulQA | TruthfulQA Score: 63 | 61
Large Language Model Evaluation | MMLU, GSM8K, GPQA, HumanEval, TruthfulQA, IFEval | MMLU: 70 | 23

(10 of 14 rows shown)
