Don't Start Over: A Cost-Effective Framework for Migrating Personalized Prompts Between LLMs
About
Personalization in Large Language Models (LLMs) often relies on user-specific soft prompts. However, these prompts become obsolete when the foundation model is upgraded, necessitating costly, full-scale retraining. To overcome this limitation, we propose the Prompt-level User Migration Adapter (PUMA), a lightweight framework for efficiently migrating personalized prompts across incompatible models. PUMA uses a parameter-efficient adapter to bridge the semantic gap between models, combined with a group-based user selection strategy that significantly reduces training costs. Experiments on three large-scale datasets show that our method matches or even surpasses retraining from scratch while reducing computational cost by up to 98%. The framework generalizes across diverse model architectures and remains robust in advanced scenarios such as chained and aggregated migrations, offering a practical path toward the sustainable evolution of personalized AI by decoupling user assets from the underlying models.
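The core idea can be sketched in a few lines: instead of retraining each user's soft prompt for the new model, a single small adapter maps prompts from the old model's embedding space into the new one. The sketch below is illustrative only — the dimensions, the two-layer MLP form, and all names are assumptions, not PUMA's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: old/new model embedding sizes and soft-prompt length.
d_old, d_new, hidden = 768, 1024, 256
prompt_len = 10

# Adapter parameters. In the paper's setting these would be trained once on a
# small, representative group of users, which is far cheaper than retraining
# every user's prompt against the upgraded model.
W1 = rng.normal(0.0, 0.02, (d_old, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.02, (hidden, d_new))
b2 = np.zeros(d_new)

def migrate_prompt(old_prompt: np.ndarray) -> np.ndarray:
    """Map a soft prompt of shape (prompt_len, d_old) into the new model's space."""
    h = np.maximum(old_prompt @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2                         # shape: (prompt_len, d_new)

# Migrating one user's prompt is a single forward pass, with no per-user training.
old_prompt = rng.normal(size=(prompt_len, d_old))
new_prompt = migrate_prompt(old_prompt)
print(new_prompt.shape)  # (10, 1024)
```

Because the adapter is shared across users, migrating each additional user's prompt costs only one forward pass, which is where the claimed cost savings over per-user retraining would come from.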
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| News Recommendation | MIND (test) | AUC | 65.46 | 27 |
| Rating Prediction | Amazon (test) | RMSE | 0.9135 | 4 |
| Rating Prediction | Yelp (test) | RMSE | 1.1073 | 4 |
| Rating Prediction | AMAZON | Time per Epoch (h) | 0.16 | 2 |