Synthetic Interaction Data for Scalable Personalization in Large Language Models
About
Personalized prompting offers substantial opportunities for deploying large language models (LLMs) to diverse users, yet existing prompt optimization methods focus on task-level objectives and largely overlook the preferences and latent constraints of individual users. This gap stems from (i) the absence of high-quality, privacy-preserving data that capture personalized user-LLM interactions at scale, and (ii) the lack of robust reward signals for individual preferences. To overcome these data limitations, we introduce PersonaGym, a high-fidelity synthetic data generation framework. Unlike prior work that treats personalization as static persona-preference pairs, PersonaGym models preference expression as a dynamic process: an agentic LLM system simulates realistic preference behaviors and semantics-aware noise to generate personalized multi-turn interaction trajectories. Using PersonaGym, we release PersonaAtlas, a large-scale, diverse synthetic dataset of high-fidelity multi-turn personalized interaction trajectories that closely mirror real-world preference expression and noise patterns. We further propose Personalized Prompt Optimization (PPOpt), a scalable, model-agnostic framework that optimizes user prompts from interaction histories without modifying the deployed LLM. PPOpt adopts a reason-then-optimize paradigm: it first infers an explicit user profile and then conditions prompt rewriting on that profile to mitigate reward hacking. Training combines a cold-start supervised prior with outcome-driven multi-objective reinforcement learning. Extensive experiments demonstrate consistent improvements over state-of-the-art baselines in task performance, personalization quality, and robustness to noisy and sparse preference signals.
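The reason-then-optimize paradigm and the multi-objective reward can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: `infer_profile` and `rewrite_prompt` stand in for LLM calls, and the reward weighting `alpha` is an assumed, illustrative choice.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    prompt: str
    response: str
    feedback: str  # e.g. "too verbose", "prefers bullet lists"

def infer_profile(history):
    """Stage 1 (reason): distill an explicit user profile from the
    interaction history. A real system would use an LLM; here we
    simply collect the feedback strings as preference statements."""
    return sorted({t.feedback for t in history})

def rewrite_prompt(prompt, profile):
    """Stage 2 (optimize): condition the rewrite on the explicit
    profile rather than on raw history, which is what limits
    reward hacking in the reason-then-optimize paradigm."""
    constraints = "; ".join(profile)
    return f"{prompt}\n\n[User preferences: {constraints}]"

def multi_objective_reward(task_score, persona_score, alpha=0.5):
    """Outcome-driven reward: a weighted mix of task completion and
    personalization quality (the weighting is illustrative)."""
    return alpha * task_score + (1 - alpha) * persona_score

history = [
    Turn("Explain transformers", "...", "prefers bullet lists"),
    Turn("Summarize this paper", "...", "too verbose"),
]
profile = infer_profile(history)
optimized = rewrite_prompt("Explain attention mechanisms", profile)
reward = multi_objective_reward(8.48, 7.2)
```

In a full system, the reward would be computed by judges scoring the deployed LLM's response to the rewritten prompt, and the rewriter would be updated by reinforcement learning on that signal after a supervised cold start.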
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Personalization | Synthetic personalized interaction datasets (eval) | Personalization Score | 7.2 | 10 |
| Task Completion | Synthetic personalized interaction datasets (eval) | Task Completion Score | 8.48 | 10 |
| Personalization | Real-World (test) | Score | 7.35 | 6 |
| Personalized Interaction | AI2 ARC Synthetic | Personalization Score | 7.38 | 6 |
| Personalized Interaction | IFEval Synthetic | Personalization Score | 6.58 | 6 |
| Personalized Interaction | MBPP Synthetic | Personalization Score | 7.9 | 6 |
| Personalized Interaction | oasst1 Synthetic | Personalization Score | 7.26 | 6 |
| Personalized Interaction | ultrachat Synthetic | Personalization Score | 7.26 | 6 |
| Task Completion | Real-World (test) | Score | 8.08 | 6 |