
Fusing finetuned models for better pretraining

About

Pretrained models are the standard starting point for training, consistently outperforming random initialization. However, pretraining is a costly endeavour that few can undertake. In this paper, we create better base models at hardly any cost by fusing multiple existing fine-tuned models into one. Specifically, we fuse by averaging the weights of these models. We show that the fused model surpasses the original pretrained model, and that fusing is often better than intertraining. We also find that fusing is less dependent on the target task than intertraining, and that weight decay nullifies the effects of intertraining but not those of fusing.
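The fusing step described in the abstract can be sketched in a few lines: given several fine-tuned checkpoints that share an architecture, average their weights parameter-by-parameter to obtain a new base model. The dict-of-lists checkpoint format below is a simplification for illustration; in practice these would be framework state dicts.

```python
def fuse(checkpoints):
    """Return the element-wise mean of a list of parameter dicts.

    Each checkpoint maps a parameter name to a flat list of weights;
    all checkpoints are assumed to share the same names and shapes.
    """
    n = len(checkpoints)
    fused = {}
    for name in checkpoints[0]:
        params = [ckpt[name] for ckpt in checkpoints]
        fused[name] = [sum(vals) / n for vals in zip(*params)]
    return fused

# Two toy "fine-tuned models" with a single weight vector each.
m1 = {"layer.weight": [1.0, 2.0, 3.0]}
m2 = {"layer.weight": [3.0, 4.0, 5.0]}
print(fuse([m1, m2]))  # {'layer.weight': [2.0, 3.0, 4.0]}
```

The fused dict can then serve as the initialization for fine-tuning on a new target task, in place of the original pretrained weights.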

Leshem Choshen, Elad Venezian, Noam Slonim, Yoav Katz • 2022

Related benchmarks

Task                   Dataset                          Metric          Result  Rank
Graph Classification   MUTAG                            Accuracy        37.4    862
Domain Generalization  DomainBed (out-of-domain)        VLCS Accuracy   78.5    38
Node Classification    Twitch                           Accuracy        55.8    30
Graph Classification   PTC                              Accuracy        50.2    14
Node Classification    OGB-Arxiv 2018-2020 1.0 (test)   Accuracy        25.63   14
