Graph Reasoning Paradigm: Structured and Symbolic Reasoning with Topology-Aware Reinforcement Learning for Large Language Models
About
Long Chain-of-Thought (LCoT), achieved by Reinforcement Learning with Verifiable Rewards (RLVR), has proven effective in enhancing the reasoning capabilities of Large Language Models (LLMs). However, reasoning in current LLMs is generated primarily as plain text, and performing semantic evaluation on such unstructured output creates a computational bottleneck during training. Despite RLVR-based optimization, existing methods still suffer from coarse-grained supervision, reward hacking, high training costs, and poor generalization. To address these issues, we propose the Graph Reasoning Paradigm (GRP), which realizes structured, symbolic reasoning via graph-structured representations with step-level cognitive labels. Building upon GRP, we further design Process-Aware Stratified Clipping Group Relative Policy Optimization (PASC-GRPO), which replaces semantic evaluation with structured evaluation, achieves process-aware verification through graph-structured outcome rewards, and mitigates reward hacking via stratified clipping advantage estimation. Experiments demonstrate significant improvements across mathematical reasoning and code generation tasks. Data, models, and code will be released later.
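The abstract does not give PASC-GRPO's update rule, so the sketch below is an illustration only: it combines the standard GRPO group-relative advantage (z-scored rewards within a sampled group) with one plausible reading of "stratified clipping", namely clip ranges that depend on the advantage stratum. The function names, the sign-based stratification, and the epsilon values are all assumptions, not the paper's method.

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    # GRPO baseline: z-score each rollout's reward within its sampled group,
    # so no learned value function is needed.
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

def stratified_clipped_objective(ratio, adv, eps_pos=0.2, eps_neg=0.1):
    # Hypothetical stratified clipping: a looser clip range for
    # positive-advantage samples and a tighter one for negative-advantage
    # samples, limiting how strongly any single (possibly reward-hacked)
    # rollout can move the policy. This is one guess at the mechanism.
    ratio = np.asarray(ratio, dtype=float)
    adv = np.asarray(adv, dtype=float)
    eps = np.where(adv >= 0.0, eps_pos, eps_neg)
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    # PPO-style pessimistic objective: take the worse of the two terms.
    return np.minimum(ratio * adv, clipped * adv).mean()
```

For example, a rollout with probability ratio 1.5 and advantage +1 contributes 1.2 (clipped at 1 + eps_pos), while ratio 0.5 with advantage -1 contributes -0.9 (clipped at 1 - eps_neg), so downweighting of bad rollouts is bounded more tightly than upweighting of good ones.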
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Code Generation | HumanEval+ | -- | -- | 189 |
| Code Generation | MBPP | Accuracy (%) | 77.49 | 146 |
| Code Generation | MBPP+ | Accuracy | 65.24 | 75 |
| Code Generation | LiveCodeBench | Accuracy | 56.12 | 32 |
| Mathematical Reasoning | AIME 25 | avg@16 Accuracy | 38.79 | 12 |
| Mathematical Reasoning | AIME 24 | avg@16 Average Score | 46.67 | 11 |
| Code Generation | MBPP | avg@16 Accuracy | 77.49 | 9 |
| Code Generation | MBPP+ | avg@16 Accuracy | 65.24 | 9 |
| Code Generation | HumanEval | avg@16 Accuracy | 88.43 | 9 |
| Code Generation | LiveCodeBench | avg@3 Accuracy | 56.12 | 9 |