
Many Preferences, Few Policies: Towards Scalable Language Model Personalization

About

The holy grail of LLM personalization is a single LLM for each user, perfectly aligned with that user's preferences. However, maintaining a separate LLM per user is impractical due to constraints on compute, memory, and system complexity. We address this challenge by developing a principled method for selecting a small portfolio of LLMs that captures representative behaviors across heterogeneous users. We model user preferences across multiple traits (e.g., safety, humor, brevity) through a multi-dimensional weight vector. Given reward functions across these dimensions, our algorithm PALM (Portfolio of Aligned LLMs) generates a small portfolio of LLMs such that, for any weight vector, the portfolio contains a near-optimal LLM for the corresponding scalarized objective. To the best of our knowledge, this is the first result that provides theoretical guarantees on both the size and approximation quality of LLM portfolios for personalization. It characterizes the trade-off between system cost and personalization, as well as the diversity of LLMs required to cover the landscape of user preferences. We provide empirical results that validate these guarantees and demonstrate greater output diversity over common baselines.
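The portfolio-selection idea in the abstract can be illustrated with a small sketch. This is not the authors' PALM algorithm, only a hedged toy analogue: it assumes candidate policies come with per-dimension reward scores, samples preference weight vectors from the simplex, and greedily picks policies until every sampled weight vector has some chosen policy within a (1 − ε) multiplicative factor of the best achievable scalarized reward. All names and the greedy covering strategy are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 50 candidate policies scored on 3 reward
# dimensions (e.g. safety, humor, brevity). In the paper's setting
# these scores would come from learned reward functions.
n_policies, n_dims = 50, 3
rewards = rng.uniform(size=(n_policies, n_dims))

# Sample user preference weight vectors from the probability simplex.
weights = rng.dirichlet(np.ones(n_dims), size=2000)

# Scalarized value of every policy under every sampled weight vector.
values = weights @ rewards.T          # shape (n_weights, n_policies)
best = values.max(axis=1)             # per-weight optimum over all policies

def greedy_portfolio(values, best, eps):
    """Greedily add policies until, for every sampled weight vector,
    some chosen policy attains at least (1 - eps) of the optimum."""
    uncovered = np.ones(values.shape[0], dtype=bool)
    covers = values >= (1 - eps) * best[:, None]  # policy i covers weight w?
    portfolio = []
    while uncovered.any():
        gains = (covers & uncovered[:, None]).sum(axis=0)
        pick = int(gains.argmax())
        portfolio.append(pick)
        uncovered &= ~covers[:, pick]
    return portfolio

portfolio = greedy_portfolio(values, best, eps=0.05)
print(len(portfolio), "policies cover all sampled preferences")
```

A small portfolio typically suffices here because nearby weight vectors share near-optimal policies, which is the trade-off between system cost and personalization the paper formalizes.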

Cheol Woo Kim, Jai Moondra, Roozbeh Nahavandi, Andrew Perrault, Milind Tambe, Swati Gupta • 2026

Related benchmarks

Task                                    Dataset             Result                          Rank
Helpful Assistants Alignment            Helpful Assistants  Multiplicative Gap (ε): 0.0145  15
Multi-objective Reinforcement Learning  RLVR-GSM            Multiplicative Gap (ε): 0.0112  12
Safety Alignment                        Safety Alignment    Multiplicative Gap (ε): 0.0131  12
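Reading the benchmark results above: assuming the multiplicative gap ε measures the worst-case relative shortfall of the portfolio's best policy against the per-weight-vector optimum (so smaller is better), it can be computed as follows. The function name and the toy numbers are illustrative, not from the paper.

```python
import numpy as np

def multiplicative_gap(portfolio_values, opt_values):
    """Smallest eps such that, for every weight vector, the portfolio's
    best policy attains at least (1 - eps) of the optimal scalarized
    reward. Rows index weight vectors; columns index portfolio policies."""
    ratio = portfolio_values.max(axis=1) / opt_values
    return float(1.0 - ratio.min())

# Toy example: 4 weight vectors, a portfolio of 2 policies.
opt = np.array([1.0, 0.8, 0.9, 1.2])
port = np.array([[0.99, 0.70],
                 [0.78, 0.80],
                 [0.85, 0.89],
                 [1.15, 1.10]])
print(multiplicative_gap(port, opt))
```

Under this reading, a reported gap of 0.0145 means every user preference in that benchmark is served within about 1.45% of its optimal scalarized reward.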
