
Meet Dynamic Individual Preferences: Resolving Conflicting Human Values with Paired Fine-Tuning

About

Recent advances in large language models (LLMs) have significantly improved the alignment of models with general human preferences. However, a major challenge remains in adapting LLMs to individual preferences, which are not only diverse but also dynamic. In this paper, we introduce a novel framework, Preference-Paired Fine-Tuning (PFT), designed to align models with contradictory and evolving individual preferences. We present a new dataset, Value Conflict Dilemma (VCD), which includes scenarios involving conflicting human preferences, facilitating the evaluation of our approach. Our experiments demonstrate that PFT outperforms single-preference training methods, achieving up to 96.6% accuracy in multi-choice classification tasks and the highest open-ended generation score of 8.69. PFT also shows significant improvements over DPO, SFT, and other traditional training methods, especially when handling conflicting preferences. Additionally, with limited user history data, models can rapidly infer preference vectors, achieving a 44.76% improvement in user-specific preference alignment compared to single-preference models.
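The abstract describes blending paired, opposing preference directions via a preference vector that can be inferred from limited user history. A minimal sketch of that idea is below; the linear scorers, the scalar weight `w`, and the grid-search inference are illustrative assumptions, not the paper's actual PFT implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 4
# Two opposing preference directions (stand-ins for a pair of
# preference-specific fine-tuned heads, e.g. "concise" vs. "detailed").
theta_plus = rng.normal(size=d)
theta_minus = rng.normal(size=d)

def paired_score(features, w):
    """Score responses as a w-weighted blend of the two opposing
    preference scorers; w plays the role of a 1-D preference vector."""
    return w * (features @ theta_plus) + (1.0 - w) * (features @ theta_minus)

def infer_w(history_features, history_choices):
    """Infer w from limited user history by grid search: pick the w
    whose blended scorer best reproduces the user's past choices."""
    grid = np.linspace(0.0, 1.0, 101)
    def accuracy(w):
        preds = paired_score(history_features, w) > 0
        return np.mean(preds == history_choices)
    return max(grid, key=accuracy)

# Tiny synthetic history from a user whose true preference weight is 0.8.
X = rng.normal(size=(50, d))
choices = paired_score(X, 0.8) > 0
w_hat = infer_w(X, choices)
print(f"inferred preference weight: {w_hat:.2f}")
```

The inferred weight can then condition generation toward that user's side of the preference pair; with only a handful of history examples the same grid search still narrows `w` quickly, which mirrors the abstract's claim about rapid inference from limited user data.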

Shanyong Wang, Shuhang Lin, Yining Zhao, Xi Zhu, Yongfeng Zhang• 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Open-ended evaluation | BQD | p+ Score: 8.69 | 39 |
| Preference Evaluation | VCD | Multi-choice One Preference p+: 91.71 | 39 |
| Multi-choice Evaluation | BQD | p+: 98.67 | 39 |
