
LM-Cocktail: Resilient Tuning of Language Models via Model Merging

About

Pre-trained language models are continually fine-tuned to better support downstream applications. However, this operation may result in significant performance degradation on general tasks beyond the targeted domain. To overcome this problem, we propose LM-Cocktail, which enables the fine-tuned model to stay resilient from a general perspective. Our method takes the form of model merging, where the fine-tuned language model is merged with the pre-trained base model or with peer models from other domains through a weighted average of their parameters. Despite its simplicity, LM-Cocktail is surprisingly effective: the resulting model achieves strong empirical performance across the whole scope of general tasks while preserving superior capacity in its targeted domain. We conduct comprehensive experiments with LLaMA and BGE models on popular benchmarks, including FLAN, MMLU, and MTEB, whose results validate the efficacy of the proposed method. The code and checkpoints are available at https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail.
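The core operation described above — merging models via a weighted average of their parameters — can be sketched as follows. This is a minimal illustration under the assumption that each model is represented as a dict mapping parameter names to lists of floats (a stand-in for real framework state dicts); `merge_models` is a hypothetical helper, not the authors' implementation.

```python
# Minimal sketch of weighted-average model merging in the spirit of
# LM-Cocktail. Models are dicts {param_name: [float, ...]}; in practice
# this would operate on framework tensors (e.g. PyTorch state_dicts).

def merge_models(models, weights):
    """Merge models by a weighted average of their parameters.

    models  -- list of parameter dicts with identical keys and shapes
    weights -- list of merging weights (typically summing to 1.0)
    """
    assert models and len(models) == len(weights)
    merged = {}
    for name in models[0]:
        merged[name] = [
            sum(w * m[name][i] for m, w in zip(models, weights))
            for i in range(len(models[0][name]))
        ]
    return merged

# Example: merge a fine-tuned model with its base model, keeping most
# of the fine-tuned weights (0.7) and mixing back some base (0.3).
tuned = {"layer.weight": [3.0, 4.0]}
base = {"layer.weight": [1.0, 2.0]}
merged = merge_models([tuned, base], [0.7, 0.3])
```

In the paper's setting, the same averaging can also combine the fine-tuned model with peer models from other domains, with weights chosen to balance targeted-domain accuracy against general-task performance.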

Shitao Xiao, Zheng Liu, Peitian Zhang, Xingrun Xing • 2023

Related benchmarks

Task                           | Dataset            | Result          | Rank
-------------------------------|--------------------|-----------------|-----
Multitask Language Understanding | MMLU (test)      | Accuracy 48.21  | 303
Sentiment Analysis             | SST-2 (test)       | Accuracy 96.56  | 136
Question Answering             | SQuAD (test)       | --              | 111
Topic Classification           | AG News (test)     | --              | 98
Question Answering             | NQ (test)          | --              | 66
Natural Language Inference     | MNLI (test)        | Accuracy 0.8923 | 38
Multi-task Generalization      | Other tasks (test) | Score 60.28     | 36
Commonsense Generation         | CommonGen (test)   | --              | 31
Information Retrieval          | TREC-COVID         | --              | 30
Commonsense Reasoning          | HELLASWAG (test)   | Accuracy 79     | 21

(Showing 10 of 26 rows)

Other info

Code: https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail
