
Mutual Reasoning Makes Smaller LLMs Stronger Problem-Solvers

About

This paper introduces rStar, a self-play mutual reasoning approach that significantly improves the reasoning capabilities of small language models (SLMs) without fine-tuning or reliance on superior models. rStar decouples reasoning into a self-play mutual generation-discrimination process. First, a target SLM augments Monte Carlo Tree Search (MCTS) with a rich set of human-like reasoning actions to construct higher-quality reasoning trajectories. Next, another SLM, with capabilities similar to the target SLM, acts as a discriminator to verify each trajectory generated by the target SLM. The mutually agreed reasoning trajectories are considered mutually consistent and are thus more likely to be correct. Extensive experiments across five SLMs demonstrate that rStar can effectively solve diverse reasoning problems, including GSM8K, GSM-Hard, MATH, SVAMP, and StrategyQA. Remarkably, rStar boosts GSM8K accuracy from 12.51% to 63.91% for LLaMA2-7B, from 36.46% to 81.88% for Mistral-7B, and from 74.53% to 91.13% for LLaMA3-8B-Instruct. Code will be available at https://github.com/zhentingqi/rStar.
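The generation-discrimination loop can be illustrated with a minimal sketch. This is not the paper's implementation: the toy `generate_trajectories` and `discriminator_completes` functions below are hypothetical stand-ins for the target SLM's MCTS rollouts and the second SLM's completions, used only to show how mutual consistency selects an answer.

```python
def generate_trajectories(question, n=4):
    """Hypothetical generator: stands in for the target SLM's MCTS,
    which would return (reasoning trajectory, final answer) candidates."""
    # Hard-coded toy candidates for the question "what is 6 * 7?".
    return [("6*7 = 42", 42), ("6*7 = 36", 36), ("6+7 = 42", 42), ("6*7 = 42", 42)]

def discriminator_completes(question, partial_reasoning):
    """Hypothetical discriminator: stands in for the second SLM, which
    completes a masked trajectory and returns its own final answer."""
    # A toy rule in place of an actual model completion.
    return 42 if "*" in partial_reasoning else None

def mutually_consistent_answer(question):
    """Keep only trajectories where the discriminator's independent
    completion agrees with the generator's answer, then majority-vote."""
    votes = {}
    for reasoning, answer in generate_trajectories(question):
        # Mask the tail of the trajectory; let the discriminator finish it.
        partial = reasoning.split("=")[0]
        if discriminator_completes(question, partial) == answer:
            votes[answer] = votes.get(answer, 0) + 1
    return max(votes, key=votes.get) if votes else None

print(mutually_consistent_answer("what is 6 * 7?"))
```

The key design choice mirrored here is that agreement between two independently produced completions, rather than a single model's self-confidence, is what promotes a trajectory to "likely correct".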

Zhenting Qi, Mingyuan Ma, Jiahang Xu, Li Lyna Zhang, Fan Yang, Mao Yang • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Uncertainty Quantification | PopQA 500 randomly sampled queries (test) | AUROC 0.825 | 70 |
| Uncertainty Quantification | HotpotQA 500 randomly sampled queries (test) | AUROC 76.94 | 70 |
| Uncertainty Quantification | Musique 500 randomly sampled queries (test) | AUROC 0.7786 | 70 |
| Mathematical Reasoning | MathInstruct Scenario 1 | Accuracy 54.4 | 53 |
| Abstention | PopQA (test) | AUARC 63.13 | 25 |
| Abstention | HotpotQA | Abstain Accuracy 75.4 | 25 |
| Abstention | MusiQ | Abstain Accuracy 89.4 | 25 |
| Abstention | Musiq (test) | AUARC 29.63 | 25 |
| Abstention | Hotpot (test) | AUARC 55.73 | 25 |
| Abstention | PopQA | Abstain Accuracy 77.2 | 25 |
