
Param$\Delta$ for Direct Weight Mixing: Post-Train Large Language Model at Zero Cost

About

The post-training phase of large language models is essential for enhancing capabilities such as instruction-following, reasoning, and alignment with human preferences. However, it demands extensive high-quality data and poses risks like overfitting, alongside significant computational costs due to repeated post-training and evaluation after each base model update. This paper introduces $Param\Delta$, a novel method that streamlines post-training by transferring knowledge from an existing post-trained model to a newly updated base model with zero additional training. By computing the difference between post-trained model weights ($\Theta_\text{post}$) and base model weights ($\Theta_\text{base}$), and adding this to the updated base model ($\Theta'_\text{base}$), we define the $Param\Delta$ Model as: $\Theta_{\text{Param}\Delta} = \Theta_\text{post} - \Theta_\text{base} + \Theta'_\text{base}$. This approach surprisingly equips the new base model with post-trained capabilities, achieving performance comparable to direct post-training. We analyzed Llama3, Llama3.1, Qwen, and DeepSeek-distilled models. Results indicate the $Param\Delta$ Model effectively replicates traditional post-training. For example, the $Param\Delta$ Model obtained from the 70B Llama3-inst, Llama3-base, and Llama3.1-base models attains approximately 95\% of the Llama3.1-inst model's performance on average. $Param\Delta$ brings a new perspective on how to fully leverage models in the open-weight community, where checkpoints for base and instruct models are readily available and frequently updated, by providing a cost-free framework to accelerate the iterative cycle of model development.
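The update rule in the abstract is a single elementwise operation over model weights. A minimal sketch, using plain Python dicts of floats to stand in for model state dicts (`param_delta` is a hypothetical helper for illustration, not code released with the paper):

```python
def param_delta(theta_post, theta_base, theta_new_base):
    """ParamDelta weight mixing: theta_post - theta_base + theta'_base.

    Each argument maps parameter names to weights (floats here; in
    practice these would be tensors from the three checkpoints, combined
    elementwise). All three models must share the same architecture,
    so the key sets must match.
    """
    assert theta_post.keys() == theta_base.keys() == theta_new_base.keys()
    return {
        name: theta_post[name] - theta_base[name] + theta_new_base[name]
        for name in theta_post
    }

# Toy example with a single scalar "parameter" per model:
theta_base = {"w": 1.0}       # original base model
theta_post = {"w": 1.5}       # post-trained (instruct) model
theta_new_base = {"w": 1.2}   # updated base model

merged = param_delta(theta_post, theta_base, theta_new_base)
# merged["w"] carries the post-training delta (+0.5) onto the new base (1.2)
```

Since the operation is purely elementwise, it runs at checkpoint-loading cost and needs no gradients or training data, which is the "zero cost" the title refers to.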

Sheng Cao, Mingrui Wu, Karthik Prasad, Yuandong Tian, Zechun Liu• 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Code Generation | HumanEval | Pass@1 | 78.7 | 1036 |
| Instruction Following | IFEval | IFEval Accuracy | 83.9 | 625 |
| Science Question Answering | ARC Challenge | Accuracy | 94.3 | 342 |
| Mathematical Reasoning | MATH | Accuracy | 49.8 | 338 |
| Multi-task Language Understanding | MMLU | Accuracy | 81.7 | 321 |
| Object Detection | Foggy Cityscapes | mAP | 44.42 | 60 |
| Multilingual Mathematical Reasoning | MGSM | Accuracy | 84.1 | 52 |
| Science Question Answering | GPQA | Accuracy | 42.2 | 42 |
| Multi-task Language Understanding | MMLU-Pro | Accuracy | 62.1 | 28 |
| Object Detection | Night Clear | mAP | 4.86 | 21 |

Showing 10 of 17 rows.
