
An Empirical Study of Multimodal Model Merging

About

Model merging (e.g., via interpolation or task arithmetic) fuses multiple models trained on different tasks to generate a multi-task solution. The technique has been proven successful in previous studies, where the models are trained on similar tasks and with the same initialization. In this paper, we expand on this concept to a multimodal setup by merging transformers trained on different modalities. Furthermore, we conduct our study for a novel goal where we can merge vision, language, and cross-modal transformers of a modality-specific architecture to create a parameter-efficient modality-agnostic architecture. Through comprehensive experiments, we systematically investigate the key factors impacting model performance after merging, including initialization, merging mechanisms, and model architectures. We also propose two metrics that assess the distance between weights to be merged and can serve as an indicator of the merging outcomes. Our analysis leads to an effective training recipe for matching the performance of the modality-agnostic baseline (i.e., pre-trained from scratch) via model merging. Our method also outperforms naive merging significantly on various tasks, with improvements of 3% on VQA, 7% on COCO retrieval, 25% on NLVR2, 14% on Flickr30k and 3% on ADE20k. Our code is available at https://github.com/ylsung/vl-merging
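The two merging mechanisms the abstract names, interpolation and task arithmetic, can be sketched as below. This is an illustrative sketch, not the authors' code: the function names and the plain-dict parameter representation are assumptions, and real models would use tensor state dicts rather than scalars.

```python
def interpolate(state_dicts, weights=None):
    """Linear interpolation: weighted average of parameter dicts.
    Assumes every model shares identical keys and shapes (same init)."""
    n = len(state_dicts)
    weights = weights or [1.0 / n] * n
    return {k: sum(w * sd[k] for w, sd in zip(weights, state_dicts))
            for k in state_dicts[0]}


def task_arithmetic(base, finetuned, scale=1.0):
    """Task arithmetic: add scaled task vectors (finetuned - base)
    back onto the shared base model's parameters."""
    return {k: base[k] + scale * sum(ft[k] - base[k] for ft in finetuned)
            for k in base}
```

With equal weights, interpolation reduces to simple parameter averaging; task arithmetic instead keeps a reference to the shared initialization and lets the contribution of each fine-tuned model be rescaled, which is one of the merging factors the paper studies.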

Yi-Lin Sung, Linjie Li, Kevin Lin, Zhe Gan, Mohit Bansal, Lijuan Wang • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Visual Question Answering | TextVQA | Accuracy 59.4 | 1117 |
| Visual Question Answering | GQA | Accuracy 58.6 | 963 |
| Object Hallucination Evaluation | POPE | -- | 935 |
| Science Question Answering | ScienceQA IMG | Accuracy 70 | 256 |
| Visual Question Answering | VQAv2 | Accuracy 79.5 | 177 |
| Multimodal Benchmark | MMBench (MMB) | Accuracy 66.5 | 70 |
| Text-based Visual Question Answering | TextVQA | Accuracy 59.4 | 23 |
| Science Question Answering | ScienceQA IMG | Accuracy 79.5 | 21 |
| Multimodal Benchmarking | MM-Bench | Accuracy 66.5 | 19 |
| Object Hallucination Evaluation | POPE | Accuracy 88 | 18 |
Showing 10 of 16 rows
