
Long-Chain Reasoning Distillation via Adaptive Prefix Alignment

About

Large Language Models (LLMs) have demonstrated remarkable reasoning capabilities, particularly in solving complex mathematical problems. Recent studies show that distilling long reasoning trajectories can effectively enhance the reasoning performance of small-scale student models. However, teacher-generated reasoning trajectories are often excessively long and structurally complex, making them difficult for student models to learn from. This mismatch creates a gap between the supervision signal and the learning capacity of the student model. To address this challenge, we propose Prefix-ALIGNment distillation (P-ALIGN), a framework that fully exploits teacher chains of thought (CoTs) for distillation through adaptive prefix alignment. Specifically, P-ALIGN adaptively truncates teacher-generated reasoning trajectories by determining whether the remaining suffix is concise and sufficient to guide the student model. P-ALIGN then uses the teacher-generated prefix to supervise the student model, encouraging effective prefix alignment. Experiments on multiple mathematical reasoning benchmarks demonstrate that P-ALIGN outperforms all baselines by over 3%. Further analysis indicates that the prefixes constructed by P-ALIGN provide more effective supervision signals while avoiding the negative impact of redundant and uncertain reasoning components. All code is available at https://github.com/NEUIR/P-ALIGN.
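The truncation idea in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's actual procedure: the sufficiency check (`suffix_is_sufficient`), its threshold, and the step-level granularity are all assumptions made here for demonstration; the real P-ALIGN criterion would come from the paper and repository.

```python
def suffix_is_sufficient(suffix_steps, max_steps=4):
    # Stand-in check (assumption): treat a suffix as "concise and
    # sufficient" if it is short and still reaches the final answer.
    return (len(suffix_steps) <= max_steps
            and any("answer" in s.lower() for s in suffix_steps))

def adaptive_prefix(teacher_cot_steps):
    # Scan candidate cut points left to right and return the shortest
    # prefix whose remaining suffix passes the sufficiency check;
    # fall back to the full trajectory if no cut point qualifies.
    for cut in range(len(teacher_cot_steps)):
        if suffix_is_sufficient(teacher_cot_steps[cut:]):
            return teacher_cot_steps[:cut]
    return teacher_cot_steps

# Toy teacher trajectory, split into reasoning steps.
cot = [
    "Restate the problem.",
    "Try factoring the expression.",
    "Notice a telescoping pattern.",
    "Simplify the sum.",
    "Answer: 42.",
]
prefix = adaptive_prefix(cot)  # this prefix would supervise the student
```

In a full pipeline, the retained prefix would then serve as the supervision target for fine-tuning the student model, per the abstract's description of prefix alignment.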

Zhenghao Liu, Zhuoyang Wu, Xinze Li, Yukun Yan, Shuo Wang, Zulong Chen, Yu Gu, Ge Yu, Maosong Sun• 2026

Related benchmarks

Task                    Dataset   Metric    Result   Rank
Mathematical Reasoning  AIME24    Accuracy  43.33    130
Mathematical Reasoning  AIME24    Pass@1    30.00    59
Mathematical Reasoning  AIME25    Pass@1    23.33    24
Mathematical Reasoning  AMC12     Accuracy  73.26    12
Mathematical Reasoning  AMC12     Pass@1    66.27    10
Mathematical Reasoning  MATH500   Pass@1    84.8     10
