
Federated Co-tuning Framework for Large and Small Language Models

About

By adapting Large Language Models (LLMs) to domain-specific tasks or enriching them with domain-specific knowledge, we can fully harness their capabilities. Nonetheless, a gap persists in achieving simultaneous mutual enhancement between the server's LLM and the downstream clients' Small Language Models (SLMs). To address this, we propose FedCoLLM, a novel and parameter-efficient federated framework for co-tuning LLMs and SLMs. This approach adaptively transfers server-side LLM knowledge to clients' SLMs while simultaneously enriching the LLM with domain insights from the clients. To accomplish this, FedCoLLM utilizes lightweight adapters in conjunction with SLMs, facilitating knowledge exchange between server and clients in a manner that respects data privacy while minimizing computational and communication overhead. Our evaluation of FedCoLLM, using various public LLMs and SLMs across a range of NLP text-generation tasks, shows that the performance of clients' SLMs improves notably with the assistance of the LLM. Simultaneously, the LLM enhanced via FedCoLLM achieves performance comparable to that obtained through direct fine-tuning on clients' data. Our code has been contributed to the FATE open-source project and is publicly accessible at https://github.com/FederatedAI/FATE-LLM/tree/main/python/fate_llm/algo/fedcollm.
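The abstract describes a pattern in which only lightweight adapter parameters, rather than full model weights, are exchanged between the server and clients. The following is a minimal sketch of the FedAvg-style aggregation step such a framework would run on the server; all names (`fedavg_adapters`, `lora_A`, `lora_B`) are illustrative assumptions, not identifiers from the paper or the FATE-LLM codebase.

```python
# Hypothetical sketch of the adapter-aggregation step in a federated
# co-tuning round: each client fine-tunes a lightweight adapter on its
# SLM locally and uploads only the adapter parameters; the server
# averages them before distilling between its LLM and the aggregated
# adapter. Parameter names below are illustrative placeholders.

def fedavg_adapters(client_adapters, client_weights=None):
    """Weighted average of per-client adapter parameter dicts.

    client_adapters: list of {param_name: list of floats} dicts.
    client_weights:  optional per-client weights (e.g. local dataset
                     sizes); defaults to a uniform average.
    """
    n = len(client_adapters)
    if client_weights is None:
        client_weights = [1.0] * n
    total = sum(client_weights)
    aggregated = {}
    for name in client_adapters[0]:
        dim = len(client_adapters[0][name])
        acc = [0.0] * dim
        for adapter, w in zip(client_adapters, client_weights):
            for i, v in enumerate(adapter[name]):
                acc[i] += (w / total) * v
        aggregated[name] = acc
    return aggregated

# Two mock clients with two adapter tensors each (flattened to lists):
clients = [
    {"lora_A": [1.0, 2.0], "lora_B": [0.0, 4.0]},
    {"lora_A": [3.0, 0.0], "lora_B": [2.0, 0.0]},
]
server_adapter = fedavg_adapters(clients)
print(server_adapter)  # {'lora_A': [2.0, 1.0], 'lora_B': [1.0, 2.0]}
```

Because only the adapters cross the network, communication cost scales with the adapter size rather than the full SLM or LLM, which is consistent with the overhead claim in the abstract.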

Tao Fan, Yan Kang, Guoqiang Ma, Lixin Fan, Shuoling Liu, Kai Chen, Qiang Yang • 2024

Related benchmarks

Task                Dataset        Metric                  Result  Rank
Question Answering  ARC Challenge  Accuracy                45.4    749
Question Answering  ARC Easy       Accuracy                75.2    386
Question Answering  OBQA           Accuracy                37.8    276
Question Answering  CQA            Accuracy                68.1    25
Question Answering  CQA            Accuracy (GPT-2-Small)  43.0    4
Question Answering  OBQA           Accuracy (GPT-2-Small)  17.8    4
Question Answering  ARC-C          Accuracy (GPT-2-Small)  21.8    4
Question Answering  ARC-E          Accuracy (GPT-2-Small)  46.8    4
