Advancing LLM Reasoning Generalists with Preference Trees
About
We introduce Eurus, a suite of large language models (LLMs) optimized for reasoning. Fine-tuned from Mistral-7B and CodeLlama-70B, Eurus models achieve state-of-the-art results among open-source models on a diverse set of benchmarks covering mathematics, code generation, and logical reasoning. Notably, Eurus-70B beats GPT-3.5 Turbo in reasoning in a comprehensive benchmarking across 12 tests covering five tasks, and achieves 33.3% pass@1 accuracy on LeetCode and 32.6% on TheoremQA, two challenging benchmarks, substantially outperforming existing open-source models by margins of more than 13.3%.

The strong performance of Eurus can be attributed primarily to UltraInteract, our newly curated large-scale, high-quality alignment dataset designed specifically for complex reasoning tasks. UltraInteract can be used for both supervised fine-tuning and preference learning. For each instruction, it includes a preference tree consisting of (1) reasoning chains with diverse planning strategies in a unified format, (2) multi-turn interaction trajectories with the environment and the critique, and (3) pairwise data to facilitate preference learning.

UltraInteract allows us to conduct an in-depth exploration of preference learning for reasoning tasks. Our investigation reveals that some well-established preference learning algorithms may be less suitable for reasoning tasks than they are for general conversation. Motivated by this, we derive a novel reward modeling objective which, together with UltraInteract, leads to a strong reward model.
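The reward modeling objective referred to above augments the standard Bradley-Terry comparison loss with terms on the absolute reward values, pushing chosen (correct) responses toward positive rewards and rejected (incorrect) ones toward negative rewards, which suits reasoning tasks where correctness is absolute rather than merely relative. The PyTorch sketch below illustrates that general shape only; the function name, the unweighted sum of the two terms, and the exact form of the absolute-reward terms are our assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def preference_tree_rm_loss(r_chosen: torch.Tensor,
                            r_rejected: torch.Tensor) -> torch.Tensor:
    """Sketch of a Bradley-Terry loss augmented with absolute-reward terms.

    r_chosen / r_rejected: scalar rewards assigned by the reward model to
    the chosen and rejected responses of each preference pair, shape (batch,).
    """
    # Standard Bradley-Terry comparison term: score the chosen response
    # higher than the rejected one.
    l_bt = -F.logsigmoid(r_chosen - r_rejected)
    # Absolute-reward terms (our assumption): drive chosen rewards above
    # zero and rejected rewards below zero, not just apart from each other.
    l_dr = -F.logsigmoid(r_chosen) - F.logsigmoid(-r_rejected)
    return (l_bt + l_dr).mean()
```

In training, `r_chosen` and `r_rejected` would come from scoring the paired responses of an UltraInteract preference tree with the reward model being optimized.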
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | AMC | Accuracy | 62.7 | 151 |
| Mathematical Reasoning | Minerva | -- | -- | 138 |
| Reward Modeling | RewardBench | Avg Score | 83.01 | 118 |
| Mathematical Reasoning | AIME 24 | Accuracy | 16.7 | 113 |
| Mathematical Reasoning | MATH 500 | Accuracy | 83.8 | 106 |
| Reward Modeling | RM-Bench | Average Score | 65.9 | 53 |
| Mathematical Reasoning | OlympiadBench | Accuracy | 0.409 | 34 |
| Mathematical Reasoning | GSM8K 10 (test) | m1@t1 | 36.3 | 24 |
| Mathematical Reasoning | Olympiad | Accuracy (%) | 40.9 | 21 |
| Mathematical Reasoning | MATH 18 (test) | m1@t1 | 12.3 | 18 |