General-Reasoner: Advancing LLM Reasoning Across All Domains

About

Reinforcement learning (RL) has recently demonstrated strong potential in enhancing the reasoning capabilities of large language models (LLMs). In particular, the "Zero" reinforcement learning introduced by Deepseek-R1-Zero enables direct RL training of base LLMs without an intermediate supervised fine-tuning stage. Despite these advances, current work on LLM reasoning focuses mainly on the mathematical and coding domains, largely because of data abundance and the ease of answer verification. This limits the applicability and generalization of such models to broader domains, where questions often have diverse answer representations and data is scarcer. In this paper, we propose General-Reasoner, a novel training paradigm designed to enhance LLM reasoning capabilities across diverse domains. Our key contributions include: (1) constructing a large-scale, high-quality dataset of questions with verifiable answers curated via web crawling, covering a wide range of disciplines; and (2) developing a generative model-based answer verifier, which replaces traditional rule-based verification with chain-of-thought, context-aware judgment. We train a series of models and evaluate them on 12 benchmarks spanning domains such as physics, chemistry, finance, and electronics (e.g. MMLU-Pro, GPQA, SuperGPQA, TheoremQA, BBEH, and MATH AMC). Our comprehensive evaluation demonstrates that General-Reasoner outperforms existing baseline methods, achieving robust and generalizable reasoning performance while maintaining superior effectiveness on mathematical reasoning tasks.
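The generative model-based answer verifier described above can be illustrated with a minimal sketch. This is not the paper's implementation: the prompt template and function names are hypothetical, and the LLM call is replaced by a stub string comparison so the control flow runs standalone; a real system would query a trained generative verifier instead.

```python
# Illustrative sketch of a generative answer verifier (not the paper's code).
# A real verifier would send the prompt to an LLM and parse its judgment;
# here `judge` is a stub that compares the two answers as strings.

def build_verifier_prompt(question: str, reference: str, candidate: str) -> str:
    """Compose a chain-of-thought judging prompt (hypothetical template)."""
    return (
        "You are a grader. Think step by step, then answer Yes or No.\n"
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Candidate answer: {candidate}\n"
        "Is the candidate answer equivalent to the reference?"
    )

def judge(prompt: str) -> str:
    """Stub standing in for an LLM call.

    Parses the answers back out of the prompt and does a trivial
    normalized string comparison (stub logic only).
    """
    fields = {}
    for line in prompt.splitlines():
        if ": " in line:
            key, value = line.split(": ", 1)
            fields[key] = value
    same = (fields["Reference answer"].strip().lower()
            == fields["Candidate answer"].strip().lower())
    return "Yes" if same else "No"

def verify(question: str, reference: str, candidate: str) -> bool:
    """Return True if the verifier judges the candidate answer correct."""
    return judge(build_verifier_prompt(question, reference, candidate)) == "Yes"
```

The point of the generative design is that the judgment step can reason about equivalence (e.g. "1/2" vs "0.5", or a free-form sentence vs a short reference), which rigid rule-based matchers miss.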

Xueguang Ma, Qian Liu, Dongfu Jiang, Ge Zhang, Zejun Ma, Wenhu Chen • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | AMC | Accuracy | 64.8 | 151 |
| Mathematical Reasoning | Minerva | Pass@1 | 39.72 | 138 |
| Mathematical Reasoning | MATH | Pass@1 | 82.45 | 112 |
| Mathematical Reasoning | AMC | Pass@1 | 73.56 | 112 |
| Mathematical Reasoning | Olympiad | Accuracy | 47.7 | 92 |
| General Reasoning | MMLU-Pro | Avg@8 Acc | 65.1 | 51 |
| Mathematical Reasoning | Olympiad | Pass@1 | 43.4 | 50 |
| Logic Reasoning | ZebraLogic | Score | 8.9 | 42 |
| Mathematical Reasoning | Mathematical Reasoning Benchmarks (GSM8K, MATH, AMC23, Olympiad, Minerva) (test) | GSM8K Accuracy | 92.7 | 32 |
| Scientific Reasoning | GPQA Diamond | Pass@1 | 0.561 | 32 |

Showing 10 of 32 rows.
