
Synthetic Interaction Data for Scalable Personalization in Large Language Models

About

Personalized prompting offers significant opportunities for deploying large language models (LLMs) to diverse users, yet existing prompt optimization methods primarily focus on task-level optimization while largely overlooking individual users' preferences and latent constraints. This gap stems primarily from (i) the absence of high-quality, privacy-sensitive data that capture personalized user-LLM interactions at scale, and (ii) the lack of robust reward signals for individual preferences. To overcome these data limitations, we introduce PersonaGym, a high-fidelity synthetic data generation framework. Unlike prior work that treats personalization as static persona-preference pairs, PersonaGym models preference expression as a dynamic process, using an agentic LLM system to simulate realistic preference behaviors and semantic-aware noise and thereby generate personalized multi-turn interaction trajectories. Using PersonaGym, we release PersonaAtlas, a large-scale, diverse synthetic dataset of high-fidelity multi-turn personalized interaction trajectories that closely mirror real-world preference expression and noise patterns. We further propose Personalized Prompt Optimization (PPOpt), a scalable and model-agnostic framework that optimizes user prompts based on interaction histories without modifying the deployed LLM. PPOpt adopts a reason-then-optimize paradigm: it first infers an explicit user profile and then conditions prompt rewriting on that profile to avoid reward hacking. Our training procedure for PPOpt integrates a cold-start supervised prior with outcome-driven multi-objective reinforcement learning. Extensive experiments demonstrate consistent improvements over state-of-the-art baselines in task performance, personalization quality, and robustness to noisy as well as sparse preference signals.
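The reason-then-optimize paradigm described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the dictionary-based profile format, and the tag-counting profiler are all illustrative stand-ins (in the paper, both steps would be performed by an LLM trained with supervised cold-start plus multi-objective RL).

```python
# Hypothetical sketch of a reason-then-optimize loop. All names and data
# formats here are illustrative assumptions, not PPOpt's actual API.

def infer_user_profile(history):
    """Step 1 (reason): distill an explicit user profile from the
    interaction history. A toy frequency count over preference tags
    stands in for an LLM-based profiler."""
    profile = {}
    for turn in history:
        for tag in turn.get("preference_tags", []):
            profile[tag] = profile.get(tag, 0) + 1
    return profile

def optimize_prompt(prompt, profile):
    """Step 2 (optimize): rewrite the user prompt conditioned on the
    explicit profile, so the rewrite is grounded in inferred preferences
    rather than driven directly by a reward signal (the grounding that
    helps mitigate reward hacking)."""
    if not profile:
        return prompt
    prefs = ", ".join(sorted(profile, key=profile.get, reverse=True))
    return f"{prompt}\n\n[User preferences: {prefs}]"

# Toy interaction history with annotated preference tags.
history = [
    {"user": "Explain transformers", "preference_tags": ["concise", "examples"]},
    {"user": "Explain attention", "preference_tags": ["concise"]},
]
profile = infer_user_profile(history)
rewritten = optimize_prompt("Explain positional encodings", profile)
print(rewritten)
```

The key design point mirrored here is that the profile is an explicit intermediate artifact: the rewriting step only sees the inferred profile, not the raw reward, which is what conditions the optimization and keeps it interpretable.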

Yuchen Ma, Yue Huang, Wenjie Wang, Xiaonan Luo, Xiangliang Zhang, Stefan Feuerriegel • 2026

Related benchmarks

Task                      Dataset                                              Result                        Rank
Personalization           Synthetic personalized interaction datasets (eval)   Personalization Score: 7.2    10
Task Completion           Synthetic personalized interaction datasets (eval)   Task Completion Score: 8.48   10
Personalization           Real-World (test)                                    Score: 7.35                   6
Personalized Interaction  AI2 ARC Synthetic                                    Personalization Score: 7.38   6
Personalized Interaction  IFEval Synthetic                                     Personalization Score: 6.58   6
Personalized Interaction  MBPP Synthetic                                       Personalization Score: 7.9    6
Personalized Interaction  oasst1 Synthetic                                     Personalization Score: 7.26   6
Personalized Interaction  ultrachat Synthetic                                  Personalization Score: 7.26   6
Task Completion           Real-World (test)                                    Score: 8.08                   6
