Dataless Knowledge Fusion by Merging Weights of Language Models
About
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models. Oftentimes fine-tuned models are readily available but their training data is not, due to data privacy or intellectual property concerns. This creates a barrier to fusing knowledge across individual models to yield a better single model. In this paper, we study the problem of merging individual models built on different training data sets to obtain a single model that both performs well across all data set domains and can generalize to out-of-domain data. We propose a dataless knowledge fusion method that merges models in their parameter space, guided by weights that minimize prediction differences between the merged model and the individual models. Over a battery of evaluation settings, we show that the proposed method significantly outperforms baselines such as Fisher-weighted averaging or model ensembling. Further, we find that our method is a promising alternative to multi-task learning that can preserve or sometimes improve over the individual models without access to the training data. Finally, model merging is more efficient than training a multi-task model, thus making it applicable to a wider set of scenarios.
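The objective described above, keeping the merged model's predictions close to each individual model's predictions, admits a closed-form solution for linear layers: minimizing the summed squared difference between the merged layer's outputs and each model's outputs leads to a weighted average of the per-model weight matrices, with weights given by the Gram matrices of each model's input activations. The sketch below illustrates this idea with NumPy; the function name, the `ridge` stabilizer, and the exact interface are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def merge_linear_weights(Xs, Ws, ridge=0.0):
    """Merge per-model linear-layer weights in parameter space.

    Minimizes  sum_i || X_i W - X_i W_i ||_F^2  over W, so the merged
    layer reproduces each individual model's outputs as closely as
    possible on that model's own inputs.

    Xs : list of (n_i, d) input-activation matrices, one per model
         (only their Gram matrices X_i^T X_i are needed, not the raw data)
    Ws : list of (d, k) weight matrices, one per model
    ridge : small non-negative value added to the diagonal for
            numerical stability (illustrative assumption)
    """
    d = Ws[0].shape[0]
    gram_sum = np.zeros((d, d))
    weighted_sum = np.zeros_like(Ws[0])
    for X, W in zip(Xs, Ws):
        gram = X.T @ X          # Gram matrix of this model's inputs
        gram_sum += gram
        weighted_sum += gram @ W
    # Solve (sum_i G_i) W = sum_i G_i W_i  -- a Gram-weighted average
    return np.linalg.solve(gram_sum + ridge * np.eye(d), weighted_sum)
```

Because the objective is a convex quadratic, the returned matrix is its global minimizer; in particular it incurs no more prediction difference than a plain parameter average of the individual weights.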
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | CIFAR-100 | -- | -- | 622 |
| Image Classification | EuroSAT | Accuracy | 78.6 | 497 |
| Image Classification | Food-101 | Accuracy | 76.14 | 494 |
| Image Classification | DTD | Accuracy | 30.53 | 487 |
| Image Classification | Stanford Cars | Accuracy | 70.8 | 477 |
| Natural Language Understanding | GLUE | SST-2 | 90.6 | 452 |
| Image Classification | SUN397 | Accuracy | 58.58 | 425 |
| Image Classification | DTD | Accuracy | 52 | 419 |
| Image Classification | MNIST | Accuracy | 90.71 | 395 |
| Natural Language Inference | RTE | Accuracy | 81.2 | 367 |