
Whose Boat Does it Float? Improving Personalization in Preference Tuning via Inferred User Personas

About

LLMs are aligned to follow input instructions by learning which of two responses users prefer for a prompt. However, such preference data do not convey why users prefer the chosen or rejected responses, so LLMs trained on these datasets cannot tailor responses to varied user needs. To surface these parameters of personalization, we apply abductive reasoning to preference data, inferring the needs and interests of users, i.e., personas, that may prefer either response. We test this idea in two steps: Persona Inference (PI), abductively inferring personas of users who prefer chosen or rejected outputs, and Persona Tailoring (PT), training models to tailor outputs to personas from PI. We show: 1) LLMs infer personas that accurately explain why different users may prefer either chosen or rejected outputs; 2) Training on preference data augmented with PI personas via PT boosts personalization and generalizes to supporting user-written personas; and 3) Rejected-response personas form harder personalization evaluations, showing PT better aids users with uncommon preferences versus typical alignment methods. We argue for an abductive view of preferences for personalization, asking not only which response is better but when, why, and for whom.

Nishant Balepur, Vishakh Padmakumar, Fumeng Yang, Shi Feng, Rachel Rudinger, Jordan Lee Boyd-Graber• 2025
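The two-step pipeline in the abstract can be sketched in code: PI asks an LLM to abductively infer a persona for each response in a preference pair, and PT turns those (persona, prompt, response) triples into training data where the rejected response becomes a valid target for its own persona. This is a minimal illustrative sketch, not the paper's implementation; the prompt wording, function names, and data fields are all assumptions, and `infer_persona` stands in for an arbitrary LLM call.

```python
def persona_inference_prompt(prompt: str, response: str) -> str:
    """PI step (sketch): build an abductive query asking what kind of
    user would prefer this particular response to the prompt.
    The wording here is illustrative, not the paper's actual prompt."""
    return (
        "Given an instruction and a response, infer the needs and interests "
        "(persona) of a user who would prefer this response.\n"
        f"Instruction: {prompt}\nResponse: {response}\nPersona:"
    )


def augment_with_personas(example: dict, infer_persona) -> list[dict]:
    """PT step (sketch): condition training inputs on inferred personas.

    Both the chosen AND rejected responses get a persona, so the model
    learns to tailor outputs rather than only reproduce majority
    preferences. `infer_persona` is any callable wrapping an LLM;
    the field names ('prompt', 'chosen', 'rejected') are assumptions.
    """
    chosen_persona = infer_persona(
        persona_inference_prompt(example["prompt"], example["chosen"]))
    rejected_persona = infer_persona(
        persona_inference_prompt(example["prompt"], example["rejected"]))
    return [
        {"input": f"Persona: {chosen_persona}\n{example['prompt']}",
         "target": example["chosen"]},
        {"input": f"Persona: {rejected_persona}\n{example['prompt']}",
         "target": example["rejected"]},
    ]
```

Note the design point this sketch encodes: in standard preference tuning the rejected response is pure negative signal, whereas here it is a positive example for the persona that would have preferred it.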

Related benchmarks

Task | Dataset | Metric | Result | Rank
LLM-as-a-Judge | PRISM | Accuracy | 57.48 | 20
LLM-as-a-Judge | ARENA | Accuracy | 63.97 | 20
LLM Personalization | BeaverTails (test) | Personalization Win Rate | 72.1 | 6
LLM Personalization | Anthropic HHH (test) | Personalization Win Rate | 56.6 | 6
LLM Personalization | Mnemonic (test) | Personalization Win Rate | 0.644 | 3

Other info

Code
