
DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models

About

Mathematical reasoning poses a significant challenge for language models due to its complex and structured nature. In this paper, we introduce DeepSeekMath 7B, which continues pre-training DeepSeek-Coder-Base-v1.5 7B with 120B math-related tokens sourced from Common Crawl, together with natural language and code data. DeepSeekMath 7B achieves an impressive score of 51.7% on the competition-level MATH benchmark without relying on external toolkits or voting techniques, approaching the performance level of Gemini-Ultra and GPT-4. Self-consistency over 64 samples from DeepSeekMath 7B achieves 60.9% on MATH. The mathematical reasoning capability of DeepSeekMath is attributed to two key factors: first, we harness the significant potential of publicly available web data through a meticulously engineered data selection pipeline; second, we introduce Group Relative Policy Optimization (GRPO), a variant of Proximal Policy Optimization (PPO) that enhances mathematical reasoning abilities while concurrently optimizing the memory usage of PPO.
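The core of GRPO, as described in the abstract, is replacing PPO's learned value (critic) network with a baseline computed from a group of sampled outputs, which is where the memory savings come from. A minimal sketch of that group-relative advantage computation (the binary rewards in the example are made up; the full algorithm also includes the policy-ratio clipping and KL terms inherited from PPO, omitted here):

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantages for one group of sampled completions.

    Each completion is scored relative to its own group: subtract the
    group mean reward and normalize by the group standard deviation,
    so no separate critic network is needed.
    """
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    if std == 0:
        # All samples got the same reward: no learning signal.
        return [0.0 for _ in rewards]
    return [(r - mean) / std for r in rewards]

# Example: 4 sampled answers to one problem, reward 1 if correct else 0.
advantages = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
```

Correct samples get a positive advantage and incorrect ones a negative advantage of equal magnitude, purely from within-group comparison.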

Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y.K. Li, Y. Wu, Daya Guo • 2024
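The 60.9% self-consistency result quoted in the abstract comes from sampling many solutions per problem and majority-voting on the extracted final answers. A minimal sketch of the voting step (the sampled answer strings are invented for illustration; extracting final answers from chain-of-thought text is a separate step not shown):

```python
from collections import Counter

def self_consistency_vote(final_answers):
    """Return the most common final answer among sampled solutions.

    Self-consistency samples multiple reasoning paths for the same
    problem and keeps the answer the largest number of paths agree on;
    ties break toward the earliest-seen answer.
    """
    return Counter(final_answers).most_common(1)[0][0]

# Example: 5 sampled solutions reduced to their final answers.
voted = self_consistency_vote(["42", "41", "42", "42", "7"])
```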

Related benchmarks

| Task                   | Dataset       | Metric           | Result | Rank |
| ---------------------- | ------------- | ---------------- | ------ | ---- |
| Mathematical Reasoning | GSM8K         | Accuracy         | 92.8   | 1362 |
| Code Generation        | HumanEval     | Pass@1           | 94.71  | 1036 |
| Question Answering     | ARC Challenge | Accuracy         | 52     | 906  |
| Mathematical Reasoning | GSM8K (test)  | Accuracy         | 86.7   | 900  |
| Mathematical Reasoning | MATH          | Accuracy         | 87.8   | 882  |
| Mathematical Reasoning | GSM8K (test)  | Accuracy         | 88.2   | 770  |
| Robot Manipulation     | LIBERO        | Goal Achievement | 10.6   | 700  |
| Reasoning              | BBH           | Accuracy         | 86.1   | 672  |
| Instruction Following  | IFEval        | IFEval Accuracy  | 85     | 625  |
| Mathematical Reasoning | MATH          | Accuracy         | 87.8   | 535  |

Showing 10 of 1,421 rows.
