
Advancing LLM Reasoning Generalists with Preference Trees

About

We introduce Eurus, a suite of large language models (LLMs) optimized for reasoning. Finetuned from Mistral-7B and CodeLlama-70B, Eurus models achieve state-of-the-art results among open-source models on a diverse set of benchmarks covering mathematics, code generation, and logical reasoning problems. Notably, Eurus-70B beats GPT-3.5 Turbo in reasoning through a comprehensive benchmarking across 12 tests covering five tasks, and achieves a 33.3% pass@1 accuracy on LeetCode and 32.6% on TheoremQA, two challenging benchmarks, substantially outperforming existing open-source models by margins of more than 13.3%. The strong performance of Eurus can be primarily attributed to UltraInteract, our newly curated large-scale, high-quality alignment dataset specifically designed for complex reasoning tasks. UltraInteract can be used in both supervised fine-tuning and preference learning. For each instruction, it includes a preference tree consisting of (1) reasoning chains with diverse planning strategies in a unified format, (2) multi-turn interaction trajectories with the environment and the critique, and (3) pairwise data to facilitate preference learning. UltraInteract allows us to conduct an in-depth exploration of preference learning for reasoning tasks. Our investigation reveals that some well-established preference learning algorithms may be less suitable for reasoning tasks than they are for general conversation. Inspired by this, we derive a novel reward modeling objective which, together with UltraInteract, leads to a strong reward model.
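To make the pairwise-preference setup concrete: each preference tree yields (chosen, rejected) response pairs, and a reward model is typically trained so the chosen response scores higher than the rejected one. The sketch below implements the standard Bradley-Terry pairwise objective, -log σ(r_chosen − r_rejected), as a minimal illustration of training on such pairs. This is the common baseline formulation, not the paper's novel reward modeling objective, and the reward values shown are made up for demonstration.

```python
import math

def sigmoid(x: float) -> float:
    """Numerically stable logistic function."""
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

def bt_pairwise_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry loss on one preference pair: -log sigma(r_c - r_r).

    The loss is small when the reward model scores the chosen response
    well above the rejected one, and large when the ranking is inverted.
    """
    return -math.log(sigmoid(reward_chosen - reward_rejected))

# Each (chosen, rejected) pair extracted from a preference tree
# contributes one loss term; values here are illustrative only.
pairs = [(2.1, 0.3), (1.5, 1.4), (0.2, 1.0)]
avg_loss = sum(bt_pairwise_loss(c, r) for c, r in pairs) / len(pairs)
```

A wider reward margin drives the loss toward zero, so minimizing it pushes the model to separate correct reasoning chains from incorrect ones.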

Lifan Yuan, Ganqu Cui, Hanbin Wang, Ning Ding, Xingyao Wang, Jia Deng, Boji Shan, Huimin Chen, Ruobing Xie, Yankai Lin, Zhenghao Liu, Bowen Zhou, Hao Peng, Zhiyuan Liu, Maosong Sun • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Mathematical Reasoning | AMC | Accuracy: 62.7 | 151 |
| Mathematical Reasoning | Minerva | -- | 138 |
| Reward Modeling | RewardBench | Avg Score: 83.01 | 118 |
| Mathematical Reasoning | AIME 24 | Accuracy: 16.7 | 113 |
| Mathematical Reasoning | MATH 500 | Accuracy: 83.8 | 106 |
| Reward Modeling | RM-Bench | Average Score: 65.9 | 53 |
| Mathematical Reasoning | OlympiadBench | Accuracy: 0.409 | 34 |
| Mathematical Reasoning | GSM8K 10 (test) | m1@t1: 36.3 | 24 |
| Mathematical Reasoning | Olympiad | Accuracy (%): 40.9 | 21 |
| Mathematical Reasoning | MATH 18 (test) | m1@t1: 12.3 | 18 |

Showing 10 of 14 rows.
