
LoRe: Personalizing LLMs via Low-Rank Reward Modeling

About

Personalizing large language models (LLMs) to accommodate diverse user preferences is essential for enhancing alignment and user satisfaction. Traditional reinforcement learning from human feedback (RLHF) approaches often rely on monolithic value representations, limiting their ability to adapt to individual preferences. We introduce a novel framework that leverages low-rank preference modeling to efficiently learn and generalize user-specific reward functions. By representing reward functions in a low-dimensional subspace and modeling individual preferences as weighted combinations of shared basis functions, our approach avoids rigid user categorization while enabling scalability and few-shot adaptation. We validate our method on multiple preference datasets, demonstrating superior generalization to unseen users and improved accuracy in preference prediction tasks.
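The abstract describes the core mechanism: each user's reward function is a weighted combination of a small set of shared basis reward functions, so adapting to a new user means fitting only a low-dimensional weight vector. The following is a minimal, self-contained sketch of that idea in Python/NumPy, not the authors' implementation: the feature dimension D, the number of basis functions K, the random linear basis, and the gradient-descent fitting loop under a Bradley-Terry preference model are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

K, D = 4, 16                  # number of shared basis rewards, feature dim (illustrative)
B = rng.normal(size=(K, D))   # shared basis: row k maps response features -> scalar reward

def basis_scores(x):
    """Scores of one response (feature vector x) under all K basis rewards."""
    return B @ x              # shape (K,)

def user_reward(w, x):
    """User-specific reward: weighted combination of the shared basis rewards."""
    return w @ basis_scores(x)

def fit_user_weights(pairs, lr=0.1, steps=200):
    """Few-shot adaptation: fit a user's weight vector from preference pairs.

    pairs: list of (x_chosen, x_rejected) feature vectors.
    Uses the Bradley-Terry likelihood P(chosen > rejected) = sigmoid(r_c - r_r).
    """
    w = np.zeros(K)
    for _ in range(steps):
        grad = np.zeros(K)
        for xc, xr in pairs:
            diff = basis_scores(xc) - basis_scores(xr)   # shape (K,)
            p = 1.0 / (1.0 + np.exp(-(w @ diff)))        # P(chosen preferred)
            grad += (p - 1.0) * diff                     # gradient of -log p w.r.t. w
        w -= lr * grad / len(pairs)
    return w

# Simulate a user whose preferences lie in the shared subspace, then
# recover their weights from a handful of pairwise comparisons.
w_true = rng.normal(size=K)
pairs = []
for _ in range(20):
    xa, xb = rng.normal(size=D), rng.normal(size=D)
    if user_reward(w_true, xa) >= user_reward(w_true, xb):
        pairs.append((xa, xb))
    else:
        pairs.append((xb, xa))

w_hat = fit_user_weights(pairs)
corr = np.corrcoef(w_true, w_hat)[0, 1]
print(f"correlation between true and recovered user weights: {corr:.2f}")
```

Because only the K-dimensional weight vector is fit per user rather than a full reward model, adaptation requires few comparisons, which is what enables the few-shot generalization to unseen users reported in the benchmarks below.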

Avinandan Bose, Zhihan Xiong, Yuejie Chi, Simon Shaolei Du, Lin Xiao, Maryam Fazel · 2025

Related benchmarks

Task                           Dataset                               Metric                Result   Rank
Personalized Reward Modeling   Reddit TLDR (100 examples, Unseen)    User-level Accuracy   68.6     11
Personalized Reward Modeling   Reddit TLDR (100 examples, Overall)   User-level Accuracy   68.3     11
Personalized Reward Modeling   Reddit TLDR (150 examples, Unseen)    User-level Accuracy   68.8     11
Personalized Reward Modeling   Reddit TLDR (100 examples, Seen)      User-level Accuracy   68.1     11
Personalized Reward Modeling   Reddit TLDR (150 examples, Seen)      User-level Accuracy   68.5     11
Personalized Reward Modeling   Reddit TLDR (150 examples, Overall)   User-level Accuracy   68.6     11
Personalized Reward Modeling   PRISM (Seen)                          User-level Accuracy   63.0     11
Personalized Reward Modeling   PRISM (Unseen)                        User-level Accuracy   63.1     11
Personalized Reward Modeling   PRISM (Overall)                       User-level Accuracy   63.0     11
