
Communication-Efficient Multimodal Federated Learning: Joint Modality and Client Selection

About

Multimodal federated learning (MFL) aims to enrich model training in FL settings where clients are collecting measurements across multiple modalities. However, key challenges to MFL remain unaddressed, particularly in heterogeneous network settings where: (i) the set of modalities collected by each client is diverse, and (ii) communication limitations prevent clients from uploading all their locally trained modality encoders to the server. In this paper, we propose Multimodal Federated learning with joint Modality and Client selection (MFedMC), a communication-efficient MFL framework that tackles these challenges through a decoupled architecture and selective uploading. Unlike traditional holistic fusion approaches, MFedMC separates modality encoders and fusion modules: modality encoders are aggregated at the server for generalization across diverse client distributions, while fusion modules remain local to each client for personalized adaptation to individual modality configurations and data characteristics. Building on this decoupled design, our joint selection algorithm incorporates two main components: (a) A modality selection methodology for each client, which weighs (i) the impact of the modality, gauged by Shapley value analysis, (ii) the modality encoder size as a gauge of communication overhead, and (iii) the frequency of modality encoder updates, denoted recency, to enhance generalizability. (b) A client selection strategy for the server based on the local loss of modality encoders at each client. Experiments on five real-world datasets demonstrate that MFedMC achieves comparable accuracy to several baselines while reducing communication overhead by over 20$\times$. A demo video and our code are available at https://liangqiy.com/mfedmc/.
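The joint selection described above can be illustrated with a minimal sketch. Note that everything here is an illustrative assumption rather than the paper's actual formulation: the additive toy utility used for the Shapley computation, the linear score with hypothetical weights `alpha`/`beta`/`gamma`, the greedy budgeted modality selection, and the top-k client selection by local loss are stand-ins for the components the abstract names.

```python
from itertools import combinations
from math import factorial

def shapley_values(modalities, utility):
    """Exact Shapley value of each modality under a coalition utility function.

    `utility` maps a set of modalities to a performance score; the Shapley
    value averages each modality's marginal contribution over all orderings.
    """
    n = len(modalities)
    values = {}
    for m in modalities:
        others = [x for x in modalities if x != m]
        phi = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (utility(set(subset) | {m}) - utility(set(subset)))
        values[m] = phi
    return values

def modality_selection(shapley, enc_size, recency, budget,
                       alpha=1.0, beta=1.0, gamma=1.0):
    """Greedy sketch of per-client modality selection under an upload budget.

    Scores each modality by (impact) - (communication cost) + (staleness),
    then greedily uploads the highest-scoring encoders that fit the budget.
    The linear form and the weights are hypothetical.
    """
    score = {m: alpha * shapley[m] - beta * enc_size[m] + gamma * recency[m]
             for m in shapley}
    chosen, used = [], 0.0
    for m in sorted(score, key=score.get, reverse=True):
        if used + enc_size[m] <= budget:
            chosen.append(m)
            used += enc_size[m]
    return chosen

def client_selection(local_losses, k):
    """Server-side sketch: pick the k clients reporting the highest local loss."""
    return sorted(local_losses, key=local_losses.get, reverse=True)[:k]
```

As a sanity check, with an additive utility the Shapley value of each modality reduces to its own contribution, so the impact term is easy to verify on toy inputs before plugging in a real validation-accuracy utility.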

Liangqi Yuan, Dong-Jun Han, Su Wang, Devesh Upadhyay, Christopher G. Brinton • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Deepfake Detection | DFC IID Setting | Accuracy (%): 67.61 | 18 |
| Action Recognition | ActionSense IID Setting | Accuracy (%): 92.28 | 9 |
| Action Recognition | ActionSense Natural Distribution | Accuracy (%): 98.87 | 9 |
| Electrocardiography Classification | PTB-XL IID Setting | Accuracy (%): 87.04 | 9 |
| Electrocardiography Classification | PTB-XL Natural Distribution | Accuracy (%): 55.09 | 9 |
| Human Activity Recognition | UCI-HAR IID Setting | Accuracy (%): 78.1 | 9 |
| Human Activity Recognition | UCI-HAR Natural Distribution | Accuracy (%): 71.28 | 9 |
| Multimodal Emotion Recognition | MELD IID Setting | Accuracy (%): 55.82 | 9 |
| Multimodal Emotion Recognition | MELD Natural Distribution | Accuracy (%): 57.43 | 9 |
| Multimodal Federated Learning | ActionSense 100 communication rounds | Training Time (mins): 19.13 | 9 |
