
Personalized Pieces: Efficient Personalized Large Language Models through Collaborative Efforts

About

Personalized large language models (LLMs) aim to tailor interactions, content, and recommendations to individual user preferences. While parameter-efficient fine-tuning (PEFT) methods excel in performance and generalization, they are costly and limit communal benefits when used individually. To this end, we introduce Personalized Pieces (Per-Pcs), a framework that allows users to safely share and assemble personalized PEFT efficiently with collaborative efforts. Per-Pcs involves selecting sharers, breaking their PEFT into pieces, and training gates for each piece. These pieces are added to a pool, from which target users can select and assemble personalized PEFT using their history data. This approach preserves privacy and enables fine-grained user modeling without excessive storage and computation demands. Experimental results show Per-Pcs outperforms non-personalized and PEFT retrieval baselines, offering performance comparable to OPPU with significantly lower resource use across six tasks. Further analysis highlights Per-Pcs's robustness concerning sharer count and selection strategy, pieces sharing ratio, and scalability in computation time and storage space. Per-Pcs's modularity promotes safe sharing, making LLM personalization more efficient, effective, and widely accessible through collaborative efforts.
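The pipeline described above (split each sharer's PEFT adapter into pieces, train a gate per piece, then let a target user score and assemble pieces from the pool using their history) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: all names, shapes, and the top-k softmax assembly rule are hypothetical assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 4 transformer layers, LoRA rank 8,
# hidden size 16, user-embedding dimension 32.
N_LAYERS, RANK, HIDDEN, DIM = 4, 8, 16, 32

# Shared pool: each sharer contributes one LoRA piece per layer,
# plus a trained gate vector used to score relevance to a target user.
pool = [
    {
        "layer": layer,
        "lora_A": rng.normal(size=(RANK, HIDDEN)),
        "lora_B": rng.normal(size=(HIDDEN, RANK)),
        "gate": rng.normal(size=DIM),
    }
    for layer in range(N_LAYERS)
    for _ in range(5)  # 5 sharers in the pool
]

def assemble(user_emb, pool, top_k=2):
    """Pick the top-k pieces per layer by gate score, average with
    softmax weights, and return the assembled per-layer adapter."""
    adapter = {}
    for layer in range(N_LAYERS):
        pieces = [p for p in pool if p["layer"] == layer]
        scores = np.array([p["gate"] @ user_emb for p in pieces])
        top = np.argsort(scores)[-top_k:]
        # Softmax weights over the selected pieces only.
        w = np.exp(scores[top] - scores[top].max())
        w /= w.sum()
        adapter[layer] = {
            "lora_A": sum(wi * pieces[i]["lora_A"] for wi, i in zip(w, top)),
            "lora_B": sum(wi * pieces[i]["lora_B"] for wi, i in zip(w, top)),
        }
    return adapter

# e.g. the mean embedding of the target user's history documents
user_emb = rng.normal(size=DIM)
adapter = assemble(user_emb, pool)
```

Only gate scores and piece weights are exchanged here, never the target user's raw history, which is the sense in which piece-level sharing preserves privacy while keeping per-user storage to a small set of selected pieces.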

Zhaoxuan Tan, Zheyuan Liu, Meng Jiang • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Personalization | LaMP-1 | Accuracy | 65.6 | 8 |
| Language Model Personalization | LaMP few-shot personalization setting | LaMP-1 Accuracy | 48 | 8 |
| Personalization | LaMP-4 | ROUGE-1 | 19.1 | 8 |
| Language Model Personalization | LaMP standard (full-data) | LaMP-1 Score | 0.698 | 8 |
| Personalization | LaMP-2 | Accuracy | 36.8 | 8 |
