
Learning from Committee: Reasoning Distillation from a Mixture of Teachers with Peer-Review

About

While reasoning capabilities typically emerge in large language models (LLMs) with tens of billions of parameters, recent research focuses on improving smaller open-source models through knowledge distillation (KD) from commercial LLMs. However, many of these studies rely solely on responses from a single LLM as the gold rationale, unlike the natural human learning process, which involves understanding both the correct answers and the reasons behind mistakes. In this paper, we introduce a novel Fault-Aware DistIllation via Peer-Review (FAIR) approach: 1) instead of merely obtaining rationales from teachers, our method asks teachers to identify and explain the student's mistakes, providing customized instruction learning data; 2) we design a simulated peer-review process among teacher LLMs and select only the generated rationales above the acceptance threshold, which reduces the chance of teachers guessing correctly with flawed rationales and improves the quality of the instructional data. Comprehensive experiments and analysis on mathematical, commonsense, and logical reasoning tasks demonstrate the effectiveness of our method. Our code is available at https://github.com/zhuochunli/Learn-from-Committee.
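The peer-review step described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the scoring interface, and the default threshold are all assumptions, and the real method would call teacher LLMs rather than an arbitrary scoring function.

```python
# Hypothetical sketch of peer-review filtering among teacher LLMs:
# each teacher's rationale is scored by the other teachers (its peers),
# and only rationales whose mean peer score clears the acceptance
# threshold are kept as instructional data.

def peer_review_filter(rationales, teachers, score_fn, threshold=0.5):
    """Keep only rationales whose mean peer score >= threshold.

    rationales: list of (author_id, rationale_text) pairs
    teachers:   list of all teacher ids acting as reviewers
    score_fn:   score_fn(reviewer_id, rationale_text) -> float in [0, 1]
                (in practice, a teacher LLM prompted to grade the rationale)
    """
    accepted = []
    for author, rationale in rationales:
        # A rationale is reviewed by its peers, never by its own author.
        peers = [t for t in teachers if t != author]
        if not peers:
            continue
        mean_score = sum(score_fn(t, rationale) for t in peers) / len(peers)
        if mean_score >= threshold:
            accepted.append((author, rationale))
    return accepted
```

Filtering on peer agreement rather than answer correctness alone is what screens out rationales that happen to reach the right answer through flawed reasoning.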

Zhuochun Li, Yuelyu Ji, Rui Meng, Daqing He • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Commonsense Reasoning | CSQA | Accuracy | 80.52 | 366 |
| Mathematical Reasoning | SVAMP (test) | Accuracy | 84.33 | 233 |
| Commonsense Reasoning | StrategyQA | Accuracy | 70.87 | 125 |
| Mathematical Reasoning | MATH 500 | Accuracy | 77.25 | 106 |
| Commonsense Reasoning | StrategyQA (test) | Accuracy | 73.07 | 81 |
| Mathematical Reasoning | GSM8K original (test) | Accuracy | 79.3 | 44 |
| Logical Reasoning | LogiQA original (test) | Accuracy | 43.16 | 22 |
| Mathematical Reasoning | SVAMP | Accuracy | 91.51 | 21 |
