
Reasoning Pattern Alignment Merging for Adaptive Reasoning

About

Recent large reasoning models (LRMs) have made substantial progress in complex reasoning tasks, yet they often generate lengthy reasoning paths for every query, incurring unnecessary computation and latency. Existing speed-up approaches typically rely on retraining the model or designing sophisticated prompting, which are either prohibitively expensive or highly sensitive to the input and prompt formulation. In this work, we study model merging as a lightweight alternative for efficient reasoning: by combining a long chain-of-thought (Long-CoT) reasoning model with a Short-CoT instruction model, we obtain an adaptive reasoner without training from scratch or requiring large-scale additional data. Building on this idea, we propose Reasoning Pattern Alignment Merging (RPAM), a layer-wise model merging framework based on feature alignment to facilitate query-adaptive reasoning. RPAM first constructs a small pattern-labeled calibration set that assigns each query an appropriate reasoning pattern. It then optimizes layer-wise merging coefficients by aligning the merged model's intermediate representations with those of the selected model, while a contrastive objective explicitly pushes them away from the non-selected model. Experiments on seven widely used reasoning benchmarks show that RPAM substantially reduces inference cost while maintaining strong performance. Upon article acceptance, we will provide open-source code to reproduce experiments for RPAM.
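The layer-wise coefficient optimization described above can be sketched as a toy example. Everything below is an illustrative assumption, not the paper's implementation: layers are stand-in weight matrices, the merging coefficient is a single scalar per layer, the optimizer is a simple finite-difference gradient step, and the contrastive term is a weighted mean-squared "push-away" penalty.

```python
import numpy as np

def merge_layer(w_long, w_short, alpha):
    # Convex combination of the Long-CoT and Short-CoT weights for one layer.
    return alpha * w_long + (1.0 - alpha) * w_short

def alignment_loss(w_merged, w_target, w_other, x, push_weight=0.1):
    # Pull merged features toward the selected ("target") model's features,
    # and push them away from the non-selected model's features.
    h_m = w_merged @ x
    pull = np.mean((h_m - w_target @ x) ** 2)
    push = np.mean((h_m - w_other @ x) ** 2)
    return pull - push_weight * push

def fit_alpha(w_long, w_short, w_target, w_other, xs, steps=200, lr=0.5):
    # 1-D optimization of the per-layer merging coefficient via
    # central finite differences, clipped to [0, 1].
    alpha, eps = 0.5, 1e-4
    for _ in range(steps):
        def loss_at(a):
            wm = merge_layer(w_long, w_short, a)
            return np.mean([alignment_loss(wm, w_target, w_other, x) for x in xs])
        grad = (loss_at(alpha + eps) - loss_at(alpha - eps)) / (2 * eps)
        alpha = float(np.clip(alpha - lr * grad, 0.0, 1.0))
    return alpha
```

In this sketch, when the calibration query is labeled with the Long-CoT pattern, `w_target` is the Long-CoT layer and the optimized coefficient drifts toward 1, so the merged layer tracks the Long-CoT representations while staying away from the Short-CoT ones; the real method applies this idea per layer across the full models.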

Zhaofeng Zhong, Wei Yuan, Tong Chen, Xiangyu Zhao, Quoc Viet Hung Nguyen, Hongzhi Yin · 2026

Related benchmarks

Task | Dataset | Result | Rank
Mathematical Reasoning | MATH500 (test) | Accuracy 96.4 | 381
Mathematical Reasoning | GSM8K | Accuracy 81.4 | 351
Scientific Reasoning | GPQA | Accuracy 28.8 | 50
Mathematical Reasoning | Olympiad Bench | Accuracy 39.9 | 23
Mathematical Reasoning | Minerva Math | Accuracy 38.2 | 14
Mathematical Reasoning | GSM8K (test) | Accuracy 95.7 | 11
General Reasoning Summary | Aggregate (GSM8K, MATH500, Minerva Math, Olympiad Bench, AIME24, AIME25, GPQA) | Accuracy 75.9 | 11
Mathematical Reasoning | AIME24 | Accuracy 26.7 | 11
Mathematical Reasoning | AIME25 | Accuracy 20.0 | 11
Scientific Question Answering | GPQA | Accuracy 66.7 | 11

(Showing 10 of 12 rows.)
