
Federated Data-Efficient Instruction Tuning for Large Language Models

About

Instruction tuning is a crucial step in improving the responsiveness of pretrained large language models (LLMs) to human instructions. Federated learning (FL) has become popular for LLM tuning because it exploits the vast private instruction data held by clients, improving data diversity. Existing federated tuning methods simply consume all local data, causing excessive computational overhead and overfitting to local data, while centralized data-efficient solutions are unsuitable for FL due to privacy concerns. This work presents FedHDS, a federated data-efficient instruction tuning approach that tunes LLMs with a representative subset of edge-side data, reducing data redundancy at both the intra- and inter-client levels without sharing raw data. Experiments with various LLMs, datasets, and partitions show that FedHDS improves Rouge-L on unseen tasks by an average of 10.72% over state-of-the-art full-data federated instruction tuning methods while using less than 1.5% of the data samples, improving training efficiency by up to tens of times.
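To make the intra-client selection idea concrete, here is a minimal, hypothetical sketch of how a client might pick a representative subset of its local data before tuning: embed each instruction sample, cluster the embeddings with k-means, and keep only the sample nearest each centroid. The function name, the clustering choice, and all parameters are illustrative assumptions, not the paper's actual FedHDS algorithm.

```python
# Hypothetical sketch of intra-client representative-subset selection
# (illustrative only; not the paper's actual FedHDS procedure).
import numpy as np

def select_representative_subset(embeddings, num_clusters, seed=0):
    """Cluster one client's instruction embeddings with k-means and keep
    the single sample closest to each centroid, so at most num_clusters
    of the client's samples are used for local tuning."""
    rng = np.random.default_rng(seed)
    n = embeddings.shape[0]
    # Initialize centroids from randomly chosen samples (copies, not views).
    centroids = embeddings[rng.choice(n, size=num_clusters, replace=False)]
    for _ in range(20):  # fixed number of Lloyd iterations
        # Distance of every sample to every centroid: shape (n, num_clusters).
        dists = np.linalg.norm(embeddings[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(num_clusters):
            members = embeddings[labels == k]
            if len(members) > 0:
                centroids[k] = members.mean(axis=0)
    # The representative of each cluster is the sample nearest its centroid.
    dists = np.linalg.norm(embeddings[:, None, :] - centroids[None, :, :], axis=2)
    selected = np.unique(dists.argmin(axis=0))
    return selected  # indices of the retained local samples

# Example: 200 local samples with 16-dim embeddings, keep ~4 representatives.
emb = np.random.default_rng(1).normal(size=(200, 16))
idx = select_representative_subset(emb, num_clusters=4)
```

Each client would then tune only on the selected indices, which is how a subset-based approach can cut local compute while keeping coverage of the local data distribution; the inter-client deduplication step described in the abstract would additionally coordinate these subsets across clients without exchanging raw data.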

Zhen Qin, Zhaomin Wu, Bingsheng He, Shuiguang Deng • 2024

Related benchmarks

Task                 Dataset                             Metric    Result   Rank
Instruction Tuning   Dolly-15K alpha=5.0                 Rouge-L   35.79    22
Instruction Tuning   Natural Instructions Meta Non-IID   Rouge-L   32.93    22
Instruction Tuning   Dolly-15K alpha=0.5                 Rouge-L   35.4     22
Federated Learning   Natural Instructions (NI)           Speedup   48.8     10
Federated Learning   Dolly-15K                           Speedup   18.86    10
