
One Adapts to Any: Meta Reward Modeling for Personalized LLM Alignment

About

Alignment of Large Language Models (LLMs) aims to steer model outputs toward human preferences, and personalized alignment further adapts models to individual users. This relies on personalized reward models that capture user-specific preferences and automatically provide individualized feedback. However, developing such models faces two critical challenges: the scarcity of feedback from individual users and the need for efficient adaptation to unseen users. We argue that addressing these constraints requires a paradigm shift: from fitting data to learn user preferences to learning the process of preference adaptation itself. To realize this, we propose Meta Reward Modeling (MRM), which reformulates personalized reward modeling as a meta-learning problem. Specifically, we represent each user's reward model as a weighted combination of base reward functions and optimize the initialization of these weights with a Model-Agnostic Meta-Learning (MAML)-style framework, supporting fast adaptation under limited feedback. To ensure robustness, we introduce the Robust Personalization Objective (RPO), which places greater emphasis on hard-to-learn users during meta-optimization. Extensive experiments on personalized preference datasets show that MRM enhances few-shot personalization, improves user robustness, and consistently outperforms baselines.
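To make the two ideas in the abstract concrete, the sketch below illustrates a user reward modeled as a weighted combination of base reward functions (r_u = w_u · r) and a MAML-style outer loop that learns the initialization of those weights, with a softmax emphasis on high-loss users standing in for RPO. This is an assumption-level illustration, not the paper's implementation: the synthetic users, the Bradley-Terry pairwise loss, the first-order meta-update, and all names and hyperparameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative constants (hypothetical; not from the paper).
K = 4            # number of base reward functions
N_USERS = 8      # users per meta-batch
INNER_LR = 0.5   # inner-loop (per-user adaptation) step size
OUTER_LR = 0.1   # outer-loop (meta) step size

def make_pairs(w_true, n_pairs=5):
    """Synthetic preference data: (preferred, dispreferred) base-reward
    feature vectors, labeled by a hidden user preference vector w_true."""
    pairs = []
    for _ in range(n_pairs):
        r_a, r_b = rng.normal(size=K), rng.normal(size=K)
        pairs.append((r_a, r_b) if w_true @ r_a >= w_true @ r_b else (r_b, r_a))
    return pairs

def loss_and_grad(w, pairs):
    """Bradley-Terry-style pairwise loss and its gradient w.r.t. the
    combination weights w, where the user reward is r_u = w @ r."""
    loss, grad = 0.0, np.zeros_like(w)
    for r_pos, r_neg in pairs:
        diff = r_pos - r_neg
        margin = w @ diff
        loss += np.log1p(np.exp(-margin))
        grad += -diff / (1.0 + np.exp(margin))
    return loss / len(pairs), grad / len(pairs)

def adapt(w0, support, steps=3):
    """Inner loop: few-shot gradient adaptation from the shared init w0."""
    w = w0.copy()
    for _ in range(steps):
        _, g = loss_and_grad(w, support)
        w = w - INNER_LR * g
    return w

# Outer loop: first-order MAML-style update of the shared initialization w0,
# with a softmax emphasis on hard-to-learn (high post-adaptation loss) users
# standing in for the paper's Robust Personalization Objective.
w0 = np.zeros(K)
for _ in range(200):
    losses, grads = [], []
    for _ in range(N_USERS):
        w_true = rng.normal(size=K)                 # a fresh synthetic user
        support, query = make_pairs(w_true), make_pairs(w_true)
        w_u = adapt(w0, support)                    # per-user fast weights
        loss_u, g = loss_and_grad(w_u, query)       # evaluate after adapting
        losses.append(loss_u)
        grads.append(g)                             # first-order approximation
    emphasis = np.exp(losses) / np.sum(np.exp(losses))
    w0 = w0 - OUTER_LR * sum(e * g for e, g in zip(emphasis, grads))
```

In the paper's setting, the base reward functions would be actual reward models and the preference pairs real user feedback; the meta-learned initialization w0 is what allows a new user's weights to be adapted from only a handful of comparisons.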

Hongru Cai, Yongqi Li, Tiezheng Yu, Fengbin Zhu, Wenjie Wang, Fuli Feng, Wenjie Li • 2026

Related benchmarks

Task                         | Dataset                             | User-level Accuracy (%) | Rank
-----------------------------|-------------------------------------|-------------------------|-----
Personalized Reward Modeling | PRISM (Seen)                        | 65.3                    | 11
Personalized Reward Modeling | PRISM (Unseen)                      | 65.2                    | 11
Personalized Reward Modeling | PRISM (Overall)                     | 65.3                    | 11
Personalized Reward Modeling | Reddit TLDR, 100 examples (Seen)    | 69.6                    | 11
Personalized Reward Modeling | Reddit TLDR, 100 examples (Unseen)  | 69.6                    | 11
Personalized Reward Modeling | Reddit TLDR, 100 examples (Overall) | 69.6                    | 11
Personalized Reward Modeling | Reddit TLDR, 150 examples (Seen)    | 69.7                    | 11
Personalized Reward Modeling | Reddit TLDR, 150 examples (Unseen)  | 69.8                    | 11
Personalized Reward Modeling | Reddit TLDR, 150 examples (Overall) | 69.7                    | 11

Other info

GitHub
