Towards Efficient CoT Distillation: Self-Guided Rationale Selector for Better Performance with Fewer Rationales

About

Chain-of-thought (CoT) distillation aims to enhance small language models' (SLMs) reasoning by transferring multi-step reasoning capability from larger teacher models. However, existing work underestimates the importance of rationale quality, focusing primarily on data quantity, which may transfer noisy or incorrect information to the student model. To address this issue, we propose Model-Oriented Rationale Selection Distillation (MoRSD), which discerns and selects high-quality rationales for distillation to further improve performance. We also propose a Rationale Difficulty (RD) metric that measures the student model's ability to generate the correct answer from a given rationale. By controlling the accuracy, diversity, and difficulty of the selected rationales, we achieve a 4.6% average improvement over the baseline on seven datasets across three tasks while using fewer rationales. Our results show that a small subset of high-quality rationales can improve the student model's reasoning ability more than training on the entire dataset. MoRSD offers a promising route to efficient CoT distillation. Our code will be released at https://github.com/Leon221220/MoRSD.
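The paper defines RD and the selection procedure precisely; as a rough illustration only, here is a minimal sketch that assumes RD is proxied by the student's negative log-likelihood of the gold answer conditioned on the rationale. The names `rationale_difficulty` and `select_rationales`, the cutoff values, and the greedy filtering are hypothetical choices for this sketch, not MoRSD's actual formulation (the diversity control mentioned in the abstract is omitted here).

```python
# Hypothetical sketch of rationale selection by accuracy and difficulty.
# All names, thresholds, and the RD definition below are assumptions for
# illustration; see the paper and released code for MoRSD's real method.
from typing import List, Tuple


def rationale_difficulty(nll_answer_given_rationale: float) -> float:
    """Assumed RD proxy: the student's negative log-likelihood of the
    gold answer conditioned on the rationale. A lower NLL means the
    rationale makes the answer easier for the student (lower RD)."""
    return nll_answer_given_rationale


def select_rationales(
    candidates: List[Tuple[str, bool, float]],  # (rationale, is_correct, student NLL)
    rd_budget: float = 2.0,  # hypothetical difficulty cutoff
    max_keep: int = 4,       # hypothetical per-question cap
) -> List[str]:
    """Keep only rationales that (1) lead to the correct answer and
    (2) the student can follow (RD below a cutoff), then truncate to a
    small subset rather than distilling from the full pool.
    The paper additionally controls diversity, which is not shown here."""
    correct = [(r, nll) for r, ok, nll in candidates if ok]
    easy_enough = [(r, nll) for r, nll in correct
                   if rationale_difficulty(nll) <= rd_budget]
    # Sort by RD so the student trains on rationales it can learn from first.
    easy_enough.sort(key=lambda x: x[1])
    return [r for r, _ in easy_enough[:max_keep]]


if __name__ == "__main__":
    pool = [
        ("step-by-step derivation A", True, 0.8),
        ("terse hint B", True, 3.1),       # correct but too hard for the student
        ("wrong derivation C", False, 0.5),
    ]
    print(select_rationales(pool))  # -> ['step-by-step derivation A']
```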

Jianzhi Yan, Le Liu, Youcheng Pan, Shiwei Chen, Yang Xiang, Buzhou Tang • 2025

Related benchmarks

Task                    Dataset         Metric              Result   Rank
Reasoning               ARC-C           -                   -        80
Commonsense Reasoning   CommonsenseQA   Accuracy (pass@1)   40.43    45
Reasoning               StrategyQA      Accuracy            64.93    40
Mathematical Reasoning  AIME 25         Average@16 Score    7.5      26
Mathematical Reasoning  AMC23           Average@16          38.75    26
Mathematical Reasoning  OlympiadBench   Pass@1              31.75    20
Mathematical Reasoning  AIME24          Avg@16              7.5      8
