
PRISP: Privacy-Safe Few-Shot Personalization via Lightweight Adaptation

About

Large language model (LLM) personalization aims to adapt general-purpose models to individual users. Most existing methods, however, are developed under data-rich and resource-abundant settings, often incurring privacy risks. In contrast, realistic personalization typically occurs after deployment under (i) extremely limited user data, (ii) constrained computational resources, and (iii) strict privacy requirements. We propose PRISP, a lightweight and privacy-safe personalization framework tailored to these constraints. PRISP leverages a Text-to-LoRA hypernetwork to generate task-aware LoRA parameters from task descriptions, and enables efficient user personalization by optimizing a small subset of task-aware LoRA parameters together with minimal additional modules using few-shot user data. Experiments on a few-shot variant of the LaMP benchmark demonstrate that PRISP achieves strong overall performance compared to prior approaches, while reducing computational overhead and eliminating privacy risks.
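The two-stage recipe in the abstract — a hypernetwork that maps a task description to LoRA parameters, followed by few-shot optimization of only a small parameter subset — can be sketched as below. This is an illustrative toy, not the paper's implementation: `LoRALinear`, `TaskHypernet`, all layer sizes, the choice to tune only the `A` factor, and the MSE objective are assumptions standing in for PRISP's Text-to-LoRA hypernetwork and its actual training objective.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class LoRALinear(nn.Module):
    """A frozen linear layer with a trainable low-rank update: W x + B A x."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        self.A = nn.Parameter(torch.zeros(rank, base.in_features))
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        return self.base(x) + x @ self.A.T @ self.B.T

class TaskHypernet(nn.Module):
    """Hypothetical hypernetwork: task-description embedding -> flat LoRA params."""
    def __init__(self, embed_dim: int, n_params: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, n_params)
        )

    def forward(self, task_embed):
        return self.net(task_embed)

d_in, d_out, rank, embed_dim = 16, 16, 4, 32  # toy sizes
layer = LoRALinear(nn.Linear(d_in, d_out), rank=rank)
n_lora = layer.A.numel() + layer.B.numel()

# Stage 1: generate task-aware LoRA parameters from a task-description embedding.
hypernet = TaskHypernet(embed_dim, n_lora)
task_embed = torch.randn(embed_dim)  # stand-in for an encoded task description
flat = hypernet(task_embed)
with torch.no_grad():
    layer.A.copy_(flat[: layer.A.numel()].view_as(layer.A))
    layer.B.copy_(flat[layer.A.numel():].view_as(layer.B))

# Stage 2: personalize by optimizing only a small subset of the generated
# parameters (here, just A) on few-shot user data; B and the base stay fixed.
layer.B.requires_grad = False
opt = torch.optim.Adam([layer.A], lr=1e-2)
x_few, y_few = torch.randn(8, d_in), torch.randn(8, d_out)  # few-shot user data
for _ in range(20):
    opt.zero_grad()
    loss = nn.functional.mse_loss(layer(x_few), y_few)
    loss.backward()
    opt.step()
```

Because user data touches only the small LoRA subset on-device, the base model never needs to see raw user examples, which is the lightweight, privacy-safe property the abstract emphasizes.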

Junho Park, Dohoon Kim, Taesup Moon • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Model Personalization | LaMP few-shot personalization setting (LaMP-1) | Accuracy | 52 | 8 |
| Personalization | LaMP-2 | Accuracy | 67.9 | 8 |
| Personalization | LaMP-4 | ROUGE-1 | 20.4 | 8 |
| Language Model Personalization | LaMP standard, full-data (LaMP-1) | Score | 0.704 | 8 |
| Personalization | LaMP-1 | Accuracy | 64 | 8 |
