
Can MLLMs Absorb Math Reasoning Abilities from LLMs as Free Lunch?

About

Math reasoning is a crucial ability of large language models (LLMs), where significant advancements have been achieved in recent years. However, most efforts focus on LLMs by curating high-quality annotation data and intricate training (or inference) paradigms, while the math reasoning performance of multi-modal LLMs (MLLMs) still lags behind. Since an MLLM typically consists of an LLM and a vision block, we wonder: Can MLLMs directly absorb math reasoning abilities from off-the-shelf math LLMs without tuning? Recent model-merging approaches may offer insights into this question. However, they overlook the alignment between the MLLM and the LLM, and we find that a large gap between their parameter spaces results in lower performance. Our empirical evidence reveals two key factors behind this issue: identifying the crucial reasoning-associated layers in the model and mitigating the gaps in parameter space. Based on these empirical insights, we propose IP-Merging, which first identifies the reasoning-associated parameters in both the MLLM and the Math LLM, then projects them into the subspace of the MLLM to maintain alignment, and finally merges the parameters in this subspace. IP-Merging is a tuning-free approach since parameters are adjusted directly. Extensive experiments demonstrate that our IP-Merging method can enhance the math reasoning ability of MLLMs directly from Math LLMs without compromising their other capabilities.
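To make the identify-project-merge idea concrete, below is a minimal, hedged sketch of merging a single weight matrix by projecting the Math-LLM parameter gap onto a low-rank subspace of the MLLM weight. The function name `ip_merge_layer`, the use of top-k left singular vectors as the subspace, and the rank `k` and scale `alpha` are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def ip_merge_layer(w_mllm, w_math, k=8, alpha=0.5):
    """Sketch of subspace-projected merging for one weight matrix.

    The Math-LLM update (w_math - w_mllm) is projected onto the top-k
    left singular subspace of the MLLM weight, so only directions
    already aligned with the MLLM's parameter space are merged in.
    k and alpha are illustrative hyperparameters (assumptions).
    """
    # Top-k left singular vectors define the MLLM's dominant subspace.
    u, _, _ = np.linalg.svd(w_mllm, full_matrices=False)
    u_k = u[:, :k]                      # (d, k) orthonormal basis
    delta = w_math - w_mllm             # reasoning-associated parameter gap
    delta_proj = u_k @ (u_k.T @ delta)  # keep only the aligned component
    return w_mllm + alpha * delta_proj  # tuning-free, direct adjustment

# Toy usage on random matrices standing in for one layer's weights.
rng = np.random.default_rng(0)
w_a = rng.normal(size=(64, 64))                 # "MLLM" weight
w_b = w_a + 0.1 * rng.normal(size=(64, 64))     # "Math LLM" weight
merged = ip_merge_layer(w_a, w_b)
print(merged.shape)
```

In practice such a merge would be applied only to the layers identified as reasoning-associated, leaving the rest of the MLLM untouched so its other capabilities are preserved.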

Yijie Hu, Zihao Zhou, Kaizhu Huang, Xiaowei Huang, Qiufeng Wang • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Vision-Language Reasoning | VL Reasoning Benchmarks | MVista Score: 73.4 | 28 |
| Vision-Language Perception | VL Perception Benchmarks | TextVQA: 84.8 | 28 |
| Vision-Language Reasoning | VL Reasoning Benchmarks (MathVista, MVerse, MathVision, MMMU, R1-OV, MMStar) | MathVista Acc: 75.5 | 25 |
| Vision-Language Perception | VL Perception Benchmarks (TextVQA, POPE, Seed-Bench) | TextVQA Score: 82.9 | 25 |
