
Probing to Refine: Reinforcement Distillation of LLMs via Explanatory Inversion

About

Distilling robust reasoning capabilities from large language models (LLMs) into smaller, computationally efficient student models remains an unresolved challenge. Despite recent advances, distilled models frequently suffer from superficial pattern memorization and subpar generalization. To overcome these limitations, we introduce a novel distillation framework that moves beyond simple mimicry to instill a deeper conceptual understanding. Our framework features two key innovations. First, to address pattern memorization, Explanatory Inversion (EI) generates targeted "explanatory probes" that compel the student to articulate the underlying logic behind an answer, rather than just memorizing it. Second, to improve generalization, Explanatory GRPO (ExGRPO) uses a reinforcement learning algorithm with a novel Dialogue Structure Utility Bonus, which explicitly rewards the student for maintaining a coherent reasoning process across these probes. Extensive evaluations on 12 datasets demonstrate significant improvements. Using Gemma-7b as the student model, our method yields an average 20.39% increase over zero-shot performance and a 6.02% improvement over state-of-the-art distillation baselines. Moreover, models distilled with our method show remarkable training efficiency (e.g., surpassing vanilla fine-tuning with 10-25% of the training data) and strong generalization to out-of-distribution tasks. Implementation is released at https://github.com/Zhen-Tan-dmml/ExGRPO.git.
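The abstract describes ExGRPO as GRPO-style reinforcement learning with an added Dialogue Structure Utility Bonus for coherent reasoning across explanatory probes. The paper does not spell out the formula here, so the sketch below is a hypothetical rendering: it assumes a per-rollout scalar coherence score and folds it into the reward before the usual GRPO group-relative normalization. The function name, `bonus_weight`, and `coherence_scores` are illustrative, not the authors' API.

```python
import statistics


def exgrpo_advantages(task_rewards, coherence_scores, bonus_weight=0.1):
    """Group-relative advantages in the style of GRPO, augmented with a
    hypothetical dialogue-structure bonus (one coherence score per rollout).

    task_rewards:     outcome rewards for each sampled rollout in a group
    coherence_scores: how consistently the rollout answered the probes
    """
    # Fold the coherence bonus into each rollout's total reward.
    totals = [r + bonus_weight * c
              for r, c in zip(task_rewards, coherence_scores)]
    # GRPO baselines each rollout against the group mean and scales by
    # the group standard deviation (guard against a zero-variance group).
    mean = statistics.mean(totals)
    std = statistics.pstdev(totals) or 1.0
    return [(t - mean) / std for t in totals]
```

A rollout that solves the task *and* stays coherent across probes thus receives a larger advantage than one with the same task reward but inconsistent explanations.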

Zhen Tan, Chengshuai Zhao, Song Wang, Jundong Li, Tianlong Chen, Huan Liu• 2026

Related benchmarks

Task                        Dataset     Metric    Result  Rank
Mathematical Reasoning      GSM8K       Accuracy  93.51   499
Mathematical Reasoning      MATH        Accuracy  80.59   338
Commonsense Reasoning       BoolQ       Accuracy  80.3    212
Mathematical Reasoning      TabMWP      Accuracy  97.61   188
Commonsense Reasoning       CSQA        Accuracy  81.45   126
Reasoning                   OpenBookQA  Accuracy  86.41   77
Natural Language Inference  aNLI        Accuracy  74.02   65
Question Answering          ARC-C       Accuracy  96.82   54
Question Answering          SQA         Accuracy  79.62   24
Reasoning                   Date        Accuracy  85.75   24
Showing 10 of 12 rows
