
Don't Start Over: A Cost-Effective Framework for Migrating Personalized Prompts Between LLMs

About

Personalization in Large Language Models (LLMs) often relies on user-specific soft prompts. However, these prompts become obsolete when the foundation model is upgraded, necessitating costly, full-scale retraining. To overcome this limitation, we propose the Prompt-level User Migration Adapter (PUMA), a lightweight framework to efficiently migrate personalized prompts across incompatible models. PUMA utilizes a parameter-efficient adapter to bridge the semantic gap, combined with a group-based user selection strategy to significantly reduce training costs. Experiments on three large-scale datasets show our method matches or even surpasses the performance of retraining from scratch, reducing computational cost by up to 98%. The framework demonstrates strong generalization across diverse model architectures and robustness in advanced scenarios like chained and aggregated migrations, offering a practical path for the sustainable evolution of personalized AI by decoupling user assets from the underlying models.
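The core idea, a lightweight adapter that maps a user's soft prompt from the old model's embedding space into the new model's space, can be sketched as follows. This is not the authors' code: the dimensions, the linear adapter form, and the synthetic training target are all illustrative assumptions (in practice, the training signal would come from task loss on the new model).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: old and new model embedding sizes, prompt length.
d_old, d_new, prompt_len = 64, 96, 8

# A user's personalized soft prompt, originally trained for the old model.
old_prompt = rng.normal(size=(prompt_len, d_old))

# Hypothetical ground-truth mapping, used here only to generate toy targets.
true_map = rng.normal(size=(d_old, d_new)) / np.sqrt(d_old)
target = old_prompt @ true_map

# Lightweight linear adapter, fit by plain gradient descent on MSE.
W = np.zeros((d_old, d_new))
for _ in range(500):
    pred = old_prompt @ W
    grad = old_prompt.T @ (pred - target) / prompt_len
    W -= 0.1 * grad

migrated_prompt = old_prompt @ W  # prompt usable in the new model's space
mse = float(np.mean((migrated_prompt - target) ** 2))
```

Because only the small adapter is trained (not the full prompt set for every user), migration is far cheaper than retraining each user's prompt from scratch, which is the cost saving the abstract refers to.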

Ziyi Zhao, Chongming Gao, Yang Zhang, Haoyan Liu, Weinan Gan, Huifeng Guo, Yong Liu, Fuli Feng • 2026

Related benchmarks

Task                 Dataset        Metric              Result  Rank
News Recommendation  MIND (test)    AUC                 65.46   27
Rating Prediction    Amazon (test)  RMSE                0.9135  4
Rating Prediction    Yelp (test)    RMSE                1.1073  4
Rating Prediction    AMAZON         Time per Epoch (h)  0.16    2
