
MiroMind-M1: An Open-Source Advancement in Mathematical Reasoning via Context-Aware Multi-Stage Policy Optimization

About

Large language models have recently evolved from fluent text generation to advanced reasoning across diverse domains, giving rise to reasoning language models (RLMs). Among these domains, mathematical reasoning serves as a representative benchmark: it requires precise multi-step logic and abstract reasoning that generalize to other tasks. While closed-source RLMs such as OpenAI's o3 demonstrate impressive reasoning capabilities, their proprietary nature limits transparency and reproducibility. Although many open-source projects aim to close this gap, most lack sufficient openness, omitting critical resources such as datasets and detailed training configurations, which hinders reproducibility. To contribute toward greater transparency in RLM development, we introduce the MiroMind-M1 series, a set of fully open-source RLMs built on the Qwen-2.5 backbone that match or exceed the performance of existing open-source RLMs. Specifically, our models are trained in two stages: supervised fine-tuning (SFT) on a carefully curated corpus of 719K math-reasoning problems with verified chain-of-thought (CoT) trajectories, followed by reinforcement learning with verifiable rewards (RLVR) on 62K challenging and verifiable problems. To enhance the robustness and efficiency of the RLVR process, we introduce Context-Aware Multi-Stage Policy Optimization, an algorithm that integrates length-progressive training with an adaptive repetition penalty to encourage context-aware RL training. Our model achieves state-of-the-art or competitive performance and superior token efficiency among Qwen-2.5-based open-source 7B and 32B models on the AIME24, AIME25, and MATH benchmarks. To facilitate reproducibility, we release the complete stack: models (MiroMind-M1-SFT-7B, MiroMind-M1-RL-7B, MiroMind-M1-RL-32B); datasets (MiroMind-M1-SFT-719K, MiroMind-M1-RL-62K); and all training and evaluation configurations. We hope these resources will support further research and foster community advancement.
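To make the two ingredients of Context-Aware Multi-Stage Policy Optimization concrete, here is a minimal, illustrative sketch of a reward function that combines a verifiable correctness signal with a repetition penalty under a length budget that grows across training stages. The function names, the n-gram penalty formulation, and all constants are assumptions for illustration; the paper's exact reward design is not reproduced here.

```python
from collections import Counter

def repetition_penalty(tokens, n=4):
    """Fraction of repeated n-grams in a generated token sequence.

    Hypothetical helper: one simple way to quantify degenerate
    repetition; the paper's adaptive formulation may differ.
    """
    if len(tokens) < n:
        return 0.0
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    counts = Counter(ngrams)
    repeated = sum(c - 1 for c in counts.values())  # duplicates beyond first occurrence
    return repeated / len(ngrams)

def campo_style_reward(tokens, is_correct, max_len, penalty_weight=1.0):
    """Toy CAMPO-style reward (illustrative, not the paper's implementation).

    - `is_correct` stands in for the verifiable-reward check (e.g., an
      exact-match answer verifier on a math problem).
    - `max_len` is the current context budget; a length-progressive
      schedule would raise it stage by stage during training.
    """
    reward = 1.0 if is_correct else 0.0
    if len(tokens) > max_len:               # discourage exceeding the current stage's budget
        reward -= 0.5
    reward -= penalty_weight * repetition_penalty(tokens)
    return reward
```

In a length-progressive schedule, `max_len` would start small (forcing concise reasoning) and increase in later stages, while the repetition term keeps long generations from degenerating into loops.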

Xingxuan Li, Yao Xiao, Dianwen Ng, Hai Ye, Yue Deng, Xiang Lin, Bin Wang, Zhanfeng Mo, Chong Zhang, Yueyi Zhang, Zonglin Yang, Ruilin Li, Lei Lei, Shihao Xu, Han Zhao, Weiling Chen, Feng Ji, Lidong Bing • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Mathematical Reasoning | OlympiadBench Math | Accuracy: 77 | 84 |
| Mathematical Reasoning | Omni-MATH | Accuracy: 54.5 | 68 |
| Mathematical Reasoning | HMMT 2025 | Accuracy: 27.5 | 38 |
| Mathematical Reasoning | AIME 2025 | Accuracy: 47.5 | 37 |
| Multi-domain language model evaluation | ODA benchmark suite (test) | General Accuracy: 64.5 | 21 |
| Mathematical Reasoning | Math domain benchmarks (GSM8K, MATH500, Omni-Math, Olympiad, AIME'24), standard (test) | GSM8K Accuracy: 94.8 | 16 |
| Reasoning | Reasoning domain benchmarks (ARC-C, BBH, GPQA, CALM, KOR-BENCH) | ARC-C Score: 92.9 | 16 |
| General Language Understanding and Reasoning | General domain benchmarks (test) | DROP Score: 85 | 16 |
| Code Generation | Code domain benchmarks | HumanEval: 82.3 | 16 |
