
Dual-Personalizing Adapter for Federated Foundation Models

About

Recently, foundation models, particularly large language models (LLMs), have demonstrated an impressive ability to adapt to various tasks through fine-tuning on diverse instruction data. Notably, federated foundation models (FedFM) have emerged as a privacy-preserving approach that fine-tunes models collaboratively under federated learning (FL) settings, leveraging many distributed datasets with non-IID data. To alleviate communication and computation overhead, parameter-efficient fine-tuning methods have been introduced, and some research has adapted personalization methods to FedFM for better alignment with user preferences. However, a critical gap in existing work is the neglect of test-time distribution shifts that arise in real-world applications: conventional methods for handling such shifts in personalized FL are less effective for FedFM, as they fail to adapt to complex distribution-shift scenarios and require training all model parameters. To bridge this gap, we refine the FedFM setting into test-time personalization, which aims to learn personalized federated foundation models on clients while simultaneously handling test-time distribution shifts. To address the challenges of this setting, we explore a simple yet effective solution: a Federated Dual-Personalizing Adapter (FedDPA) architecture. Working alongside a foundation model, a global adapter and a local adapter jointly tackle test-time distribution shifts and client-specific personalization. Additionally, we introduce an instance-wise dynamic weighting mechanism that integrates the global and local adapters for each test instance during inference, enabling effective test-time personalization. The effectiveness of the proposed method has been evaluated on benchmark datasets across a range of NLP tasks.
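The dual-adapter idea in the abstract can be sketched in a few lines of code. The following is a minimal illustration, not the paper's implementation: it assumes LoRA-style low-rank adapters and a hypothetical distance-based instance weighting (softmax over negative distances to local and global feature centroids); all function names and the centroid heuristic are illustrative assumptions.

```python
import numpy as np

def adapter(x, A, B):
    # LoRA-style low-rank adapter: delta = x @ A @ B (illustrative form).
    return x @ A @ B

def dynamic_weight(h, local_centroid, global_centroid):
    # Hypothetical instance-wise weighting: the closer the instance
    # representation h is to the client's local data centroid, the more
    # weight the local adapter receives (softmax over negative distances).
    d_local = np.linalg.norm(h - local_centroid)
    d_global = np.linalg.norm(h - global_centroid)
    e = np.exp(-np.array([d_local, d_global]))
    w_local, w_global = e / e.sum()
    return w_local, w_global

def feddpa_forward(x, W, local_AB, global_AB, local_centroid, global_centroid):
    # W stands in for a frozen foundation-model layer; only the two
    # adapters would be trained. Their outputs are mixed per instance.
    h = x @ W
    w_local, w_global = dynamic_weight(h, local_centroid, global_centroid)
    delta = w_local * adapter(x, *local_AB) + w_global * adapter(x, *global_AB)
    return h + delta
```

In this sketch, an instance that looks like the client's own data is steered toward the local (personalized) adapter, while out-of-distribution instances lean on the global adapter, matching the abstract's division of labor between personalization and test-time shift handling.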

Yiyuan Yang, Guodong Long, Tao Shen, Jing Jiang, Michael Blumenstein• 2024

Related benchmarks

Task                            | Dataset                             | Result                      | Rank
Image Classification            | DomainNet                           | Accuracy (ClipArt): 87.9    | 206
Instruction Following           | Natural Instructions (test)         | Rouge-L: 95                 | 90
Commonsense Reasoning           | Commonsense Reasoning Suite (test)  | HellaSwag Accuracy: 0.6573  | 62
Personalized Federated Learning | DRAKE dynamic (Self)                | Alast: 67.38                | 40
Personalized Federated Learning | DRAKE dynamic (Others)              | Alast: 48.4                 | 40
Image Classification            | DomainNet (unseen clients)          | Average Accuracy: 83.5      | 34
Personalized Federated Learning | DRAKE (Self)                        | Alast: 66.09                | 30
Natural Language Understanding  | GLUE (Self)                         | Alast: 61.47                | 20
Personalized Federated Learning | DRAKE static (Others)               | Alast: 48.18                | 20
Natural Language Understanding  | GLUE (Others)                       | Alast Score: 25.78          | 20

(Showing 10 of 49 rows.)
