
Dual-Personalizing Adapter for Federated Foundation Models

About

Recently, foundation models, particularly large language models (LLMs), have demonstrated an impressive ability to adapt to various tasks by fine-tuning on diverse instruction data. Notably, federated foundation models (FedFM) have emerged as a privacy-preserving approach to fine-tune models collaboratively under federated learning (FL) settings, leveraging many distributed datasets with non-IID data. To alleviate communication and computation overhead, parameter-efficient fine-tuning methods have been introduced, and some research has adapted personalization methods to FedFM for better alignment with user preferences. However, a critical gap in existing research is the neglect of test-time distribution shifts in real-world applications, and conventional methods for test-time distribution shifts in personalized FL are less effective for FedFM: they fail to adapt to complex distribution-shift scenarios and require training all parameters. To bridge this gap, we refine the setting in FedFM, termed test-time personalization, which aims to learn personalized federated foundation models on clients while simultaneously handling test-time distribution shifts. To address the challenges of this setting, we explore a simple yet effective solution: a Federated Dual-Personalizing Adapter (FedDPA) architecture. Working jointly with a foundation model, a global adapter and a local adapter tackle test-time distribution shifts and client-specific personalization, respectively. Additionally, we introduce an instance-wise dynamic weighting mechanism that integrates the global and local adapters for each test instance during inference, facilitating effective test-time personalization. The effectiveness of the proposed method has been evaluated on benchmark datasets across different NLP tasks.
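The dual-adapter idea above can be sketched in a few lines: a frozen base weight is augmented by two low-rank (LoRA-style) adapters, and a per-instance weight blends their outputs at inference time. The sketch below is illustrative only; the shapes, the cosine-similarity weighting against a local-data centroid, and all names (`feddpa_forward`, `dynamic_weight`, `local_centroid`) are assumptions, not the authors' exact mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)
D, R = 16, 4  # hidden size and adapter rank (illustrative values)

# Frozen base weight plus two LoRA-style low-rank adapters (A, B factors).
W_base = rng.normal(size=(D, D))
global_adapter = (rng.normal(size=(D, R)), rng.normal(size=(R, D)))
local_adapter = (rng.normal(size=(D, R)), rng.normal(size=(R, D)))

# Centroid of the client's local training representations
# (assumed available; a stand-in for the paper's learned weighting signal).
local_centroid = rng.normal(size=D)

def adapter_out(x, adapter):
    """Low-rank update: x @ A @ B."""
    a, b = adapter
    return x @ a @ b

def dynamic_weight(x, centroid):
    """Instance-wise weight in [0, 1]: cosine similarity between the test
    instance and the local centroid, shifted and rescaled. A plausible
    proxy for the paper's dynamic weighting, not its exact definition."""
    cos = x @ centroid / (np.linalg.norm(x) * np.linalg.norm(centroid) + 1e-8)
    return (cos + 1.0) / 2.0

def feddpa_forward(x):
    """Blend the local (personalized) and global (shift-robust) adapters,
    weighted per instance, on top of the frozen base projection."""
    w = dynamic_weight(x, local_centroid)
    base = x @ W_base
    return base + w * adapter_out(x, local_adapter) \
                + (1.0 - w) * adapter_out(x, global_adapter)

x = rng.normal(size=D)  # one test instance
y = feddpa_forward(x)
print(y.shape)  # (16,)
```

An instance close to the client's local data gets a weight near 1 and relies mostly on the local adapter; an out-of-distribution instance shifts weight toward the global adapter, which is the intuition behind handling test-time distribution shifts.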

Yiyuan Yang, Guodong Long, Tao Shen, Jing Jiang, Michael Blumenstein • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Instruction Following | Natural Instructions (test) | Rouge-L | 95 | 90 |
| Commonsense Reasoning | Commonsense Reasoning Suite (test) | Avg Accuracy | 0.7233 | 22 |
| Image Classification | CIFAR-100 20 clients (personalized) | Client 1 Accuracy | 67.41 | 7 |
| Image Classification | CIFAR-100 personalized (test) | Client 1 Accuracy | 62.8 | 7 |
| Natural Language Processing | FLAN 8-task subset: arc_challenge, cosmos_qa, definite_pronoun_resolution, glue_qqp, hellaswag, mnli, squad_v1, sst2 | Closed-book QA | 70.03 | 7 |
| Natural Language Processing | Federated Dataset 1 (Personalization) | Paraphrasing Score | 0.805 | 6 |
| Natural Language Processing | Federated Dataset Personalization 2 | Paraphrasing Accuracy | 90.5 | 6 |
| Natural Language Processing | Federated Dataset 1 Test-Time Personalization | Paraphrase Accuracy | 78.1 | 4 |
| Natural Language Processing | Federated Dataset Test-Time Personalization 2 | Paraphrasing | 71.64 | 4 |
| Open-domain QA | Federated Dataset 1 unseen tasks (test) | AVG Score | 78.76 | 4 |
Showing 10 of 12 rows
