
Skywork Open Reasoner 1 Technical Report

About

The success of DeepSeek-R1 underscores the significant role of reinforcement learning (RL) in enhancing the reasoning capabilities of large language models (LLMs). In this work, we present Skywork-OR1, an effective and scalable RL implementation for long Chain-of-Thought (CoT) models. Building on the DeepSeek-R1-Distill model series, our RL approach achieves notable performance gains, increasing average accuracy across AIME24, AIME25, and LiveCodeBench from 57.8% to 72.8% (+15.0%) for the 32B model and from 43.6% to 57.5% (+13.9%) for the 7B model. Our Skywork-OR1-32B model surpasses both DeepSeek-R1 and Qwen3-32B on the AIME24 and AIME25 benchmarks, while achieving comparable results on LiveCodeBench. The Skywork-OR1-7B and Skywork-OR1-Math-7B models demonstrate competitive reasoning capabilities among models of similar size. We perform comprehensive ablation studies on the core components of our training pipeline to validate their effectiveness. Additionally, we thoroughly investigate the phenomenon of entropy collapse, identify key factors affecting entropy dynamics, and demonstrate that mitigating premature entropy collapse is critical for improved test performance. To support community research, we fully open-source our model weights, training code, and training datasets.

Jujie He, Jiacai Liu, Chris Yuhao Liu, Rui Yan, Chaojie Wang, Peng Cheng, Xiaoyu Zhang, Fuxiang Zhang, Jiacheng Xu, Wei Shen, Siyuan Li, Liang Zeng, Tianwen Wei, Cheng Cheng, Bo An, Yang Liu, Yahui Zhou • 2025
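The entropy-collapse phenomenon discussed in the abstract concerns the policy's token-level entropy during RL training. As a concrete illustration only, not the authors' released code, the sketch below shows one standard way to compute mean per-token entropy over sampled responses; the function name, tensor shapes, and random inputs are assumptions.

```python
# A minimal sketch (not Skywork-OR1's training code) of tracking mean
# per-token policy entropy so that premature entropy collapse can be detected.
import torch
import torch.nn.functional as F

def mean_token_entropy(logits: torch.Tensor, response_mask: torch.Tensor) -> torch.Tensor:
    """Average entropy of the next-token distribution over response positions.

    logits:        [batch, seq_len, vocab_size] raw policy outputs (assumed shape)
    response_mask: [batch, seq_len], 1.0 at generated tokens, 0.0 elsewhere
    """
    log_probs = F.log_softmax(logits, dim=-1)          # log pi(token | context)
    entropy = -(log_probs.exp() * log_probs).sum(-1)   # H at each position
    return (entropy * response_mask).sum() / response_mask.sum().clamp(min=1.0)

# Toy usage: log this scalar each training step; a sustained slide toward 0
# signals the policy is becoming near-deterministic (entropy collapse).
logits = torch.randn(2, 8, 1000)
mask = torch.ones(2, 8)
print(float(mean_token_entropy(logits, mask)))
```

Monitoring this scalar over training steps is one way to observe the entropy dynamics the report analyzes; a policy whose entropy falls too quickly stops exploring, which the report links to degraded test performance.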

Related benchmarks

Task | Dataset | Result | Rank
Mathematical Reasoning | Minerva Math | Accuracy: 49.3 | 209
Mathematical Reasoning | AIME 2024 (test) | -- | 159
Mathematical Reasoning | Olympiad Bench | Accuracy: 73.5 | 123
Mathematical Reasoning | HMMT 2025 | -- | 70
Mathematical Reasoning | AIME 2025 (test) | Pass@1 Rate: 73.3 | 63
Mathematical Reasoning | AMC 23 | Accuracy: 73.5 | 56
Mathematical Reasoning | Math Benchmarks Average | Accuracy (ACC): 54.52 | 35
Mathematical Reasoning Process Evaluation | PROCESSBENCH | GSM8K Accuracy: 70.8 | 28
Mathematical Reasoning | AIME 25 | Mean Accuracy: 52.3 | 26
Mathematical Reasoning | Competition-level Math Benchmarks (AIME24, AIME25, AMC23, MATH500, Olympiad, Minerva) | -- | 21

Showing 10 of 24 rows.
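The table mixes metric names (Accuracy, Pass@1 Rate, Mean Accuracy). On small competition sets such as AIME, pass@1 is commonly estimated by averaging correctness over k sampled responses per problem (often written avg@k). Below is a sketch of that estimator under this assumption; the data is made up and not from the report.

```python
# Illustrative estimator only; the inputs are invented, not reported results.
# pass@1 on a small benchmark is often estimated as avg@k: average per-sample
# correctness over k responses per problem, then average over problems.
def pass_at_1(results: list[list[bool]]) -> float:
    """results[i][j] is True iff sample j for problem i is correct."""
    per_problem = [sum(samples) / len(samples) for samples in results]
    return sum(per_problem) / len(per_problem)

print(pass_at_1([[True, False, True, True],      # problem 1: 3/4 correct
                 [False, False, True, False]]))  # problem 2: 1/4 -> prints 0.5
```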
