
Logic-RL: Unleashing LLM Reasoning with Rule-Based Reinforcement Learning

About

Inspired by the success of DeepSeek-R1, we explore the potential of rule-based reinforcement learning (RL) in large reasoning models. To analyze reasoning dynamics, we use synthetic logic puzzles as training data because of their controllable complexity and straightforward answer verification. We make several key technical contributions that lead to effective and stable RL training: a system prompt that emphasizes the thinking and answering process, a stringent format reward function that penalizes outputs that take shortcuts, and a straightforward training recipe that achieves stable convergence. Our 7B model develops advanced reasoning skills, such as reflection, verification, and summarization, that are absent from the logic corpus. Remarkably, after training on just 5K logic problems, it generalizes to the challenging math benchmarks AIME and AMC.
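The "stringent format reward" can be sketched as a simple rule-based check: verify that the response follows the required thinking-then-answering structure before comparing the final answer. The tag names, score values, and function below are illustrative assumptions in the spirit of the abstract, not the paper's exact implementation:

```python
import re

def rule_based_reward(response: str, gold_answer: str) -> float:
    """Sketch of a rule-based reward: format check plus answer check.

    Assumes the model is prompted to reason inside <think>...</think>
    and answer inside <answer>...</answer>; the exact tags and reward
    magnitudes here are hypothetical.
    """
    # Strict format: exactly one think block followed by one answer block.
    pattern = r"^<think>.*?</think>\s*<answer>(.*?)</answer>\s*$"
    match = re.match(pattern, response, flags=re.DOTALL)
    if match is None:
        # Format penalty discourages shortcut outputs that skip reasoning.
        return -1.0
    answer = match.group(1).strip()
    return 1.0 if answer == gold_answer.strip() else -0.5
```

Because the reward depends only on string matching, it needs no learned reward model and cannot be gamed by plausible-sounding but unverifiable text, which is what makes logic puzzles with unique answers a convenient training corpus.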

Tian Xie, Zitian Gao, Qingnan Ren, Haoming Luo, Yuqian Hong, Bryan Dai, Joey Zhou, Kai Qiu, Zhirong Wu, Chong Luo • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Writing | WritingBench | Score | 54.5 | 58
Logic reasoning | ZebraLogic | Score | 10.1 | 42
Knowledge Reasoning | MMLU-Pro | -- | -- | 40
Code | HumanEval+ | Accuracy | 64 | 34
Coding | HumanEval | HumanEval Mean Score | 0.689 | 32
Large Language Model Evaluation | MMLU, GSM8K, GPQA, HumanEval, TruthfulQA, IFEval | MMLU | 63.8 | 23
Mathematical Reasoning | Minerva | Avg@2 | 54.9 | 16
STEM Reasoning | TheoremQA | Avg@2 | 52.2 | 16
Mathematical Reasoning | MATH 500 | Avg@2 | 76.5 | 16
Logic reasoning | Autologic cn | Score | 25.1 | 16

(Showing 10 of 12 rows)
