MergeBench: A Benchmark for Merging Domain-Specialized LLMs

About

Model merging provides a scalable alternative to multi-task training by combining specialized finetuned models through parameter arithmetic, enabling efficient deployment without the need for joint training or access to all task data. While recent methods have shown promise, existing evaluations are limited in both model scale and task diversity, leaving open questions about their applicability to large, domain-specialized LLMs. To address these limitations, we introduce MergeBench, a comprehensive evaluation suite designed to assess model merging at scale. MergeBench builds on state-of-the-art open-source language models, including the Llama and Gemma families at 2B to 9B scales, and covers five key domains: instruction following, mathematics, multilingual understanding, coding, and safety. We standardize finetuning and evaluation protocols and assess eight representative merging methods on multi-task performance, forgetting, and runtime efficiency. Based on extensive experiments, we provide practical guidelines for algorithm selection and share insights showing that model merging tends to perform better on stronger base models, and that techniques such as merging-coefficient tuning and sparsification improve knowledge retention. However, several challenges remain, including the computational cost of merging large models, the in-domain performance gap relative to multi-task trained models, and the underexplored role of model merging in standard LLM training pipelines. We hope MergeBench provides a foundation for future research to advance the understanding and practical application of model merging. Our project page is at https://yifei-he.github.io/mergebench/.
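
To make the setup concrete, the sketch below illustrates the kind of parameter arithmetic that merging methods of this family build on: each specialist's task vector (finetuned weights minus base weights) is optionally sparsified by magnitude, scaled by a merging coefficient, and added back to the base model. This is a minimal illustration of the general recipe, not MergeBench's own implementation; the function name, coefficient, and density values are hypothetical.

```python
# Minimal sketch of task-arithmetic-style merging with a tunable coefficient
# and magnitude-based sparsification. Illustrative only; not the benchmark's
# reference implementation. All names and default values are hypothetical.
import torch


def merge_task_arithmetic(base, finetuned_models, coeff=0.3, density=0.2):
    """Merge specialist checkpoints into the base model's parameter space.

    base:             dict[str, Tensor] of base-model parameters
    finetuned_models: list of dicts with the same keys (domain specialists)
    coeff:            merging coefficient applied to each task vector
    density:          fraction of largest-magnitude entries kept per task vector
    """
    merged = {name: p.clone() for name, p in base.items()}
    for ft in finetuned_models:
        for name, p in ft.items():
            delta = p - base[name]  # task vector for this parameter tensor
            if density < 1.0:
                # Sparsify: keep only the top-k entries by absolute magnitude.
                k = max(1, int(density * delta.numel()))
                threshold = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
                delta = torch.where(delta.abs() >= threshold, delta, torch.zeros_like(delta))
            merged[name] += coeff * delta  # scaled parameter arithmetic
    return merged


if __name__ == "__main__":
    # Toy example with random tensors standing in for 2B-9B LLM checkpoints.
    torch.manual_seed(0)
    base = {"w": torch.randn(4, 4)}
    specialists = [{"w": base["w"] + 0.1 * torch.randn(4, 4)} for _ in range(3)]
    merged = merge_task_arithmetic(base, specialists, coeff=0.3, density=0.25)
    print(merged["w"].shape)
```

In this scheme, the merging coefficient and the sparsity level are precisely the knobs referred to above when noting that coefficient tuning and sparsification help retain the base model's knowledge.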

Yifei He, Siqi Zeng, Yuzheng Hu, Rui Yang, Tong Zhang, Han Zhao • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Causal Language Modeling | M2D2 (test) | Accuracy (Coding): 100 | 11 |
| Reasoning | Mathematics | Normalized Score: 100 | 9 |
| Reasoning | Instruction Following | Normalized Score: 100 | 9 |
| Reasoning | Coding | Normalized Score: 100 | 9 |
| Reasoning | Mathematics, Multilingual, Coding, and Instruction Following Aggregate | Average Normalized Score: 82.1 | 9 |
| Reasoning | Multilingual | Normalized Score: 100 | 9 |
