
Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't

About

Enhancing the reasoning capabilities of large language models (LLMs) typically relies on massive computational resources and extensive datasets, limiting accessibility for resource-constrained settings. Our study investigates the potential of reinforcement learning (RL) to improve reasoning in small LLMs, focusing on a 1.5-billion-parameter model, DeepSeek-R1-Distill-Qwen-1.5B, under strict constraints: training on 4 NVIDIA A40 GPUs (48 GB VRAM each) within 24 hours. Adapting the Group Relative Policy Optimization (GRPO) algorithm and curating a compact, high-quality mathematical reasoning dataset, we conducted three experiments to explore model behavior and performance. Our results demonstrate rapid reasoning gains (e.g., AMC23 accuracy rising from 63% to 80% and AIME24 reaching 46.7%, surpassing o1-preview) using only 7,000 samples and a $42 training cost, compared to thousands of dollars for baseline models. However, challenges such as optimization instability and length constraints emerged with prolonged training. These findings highlight the efficacy of RL-based fine-tuning for small LLMs, offering a cost-effective alternative to large-scale approaches. We release our code and datasets as open-source resources, providing insights into trade-offs and laying a foundation for scalable, reasoning-capable LLMs in resource-limited environments. All resources are available at https://github.com/knoveleng/open-rs.
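The GRPO algorithm mentioned above replaces PPO's learned value-function baseline with a group-relative one: for each prompt, several completions are sampled, and each completion's advantage is its reward normalized against the group's mean and standard deviation. A minimal sketch of that advantage computation (illustrative only; the function name and reward scheme are hypothetical, not the authors' code):

```python
# Sketch of GRPO-style group-relative advantages, assuming a simple
# 0/1 correctness reward per sampled completion.
from statistics import mean, stdev

def group_relative_advantages(rewards, eps=1e-8):
    """A_i = (r_i - mean(r)) / (std(r) + eps) over one prompt's group.

    These advantages stand in for a critic's baseline, so no separate
    value network needs to be trained -- a key memory saving on small
    GPU budgets like the 4x A40 setup described in the abstract.
    """
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: 4 completions for one math prompt, reward 1.0 if the final
# answer was correct, 0.0 otherwise.
advantages = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
```

Correct completions receive positive advantages and incorrect ones negative, and the advantages of each group sum to zero, so the policy update pushes probability mass toward the better completions within each prompt's group.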

Quy-Anh Dang, Chris Ngo • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Mathematical Reasoning | MATH 500 | Pass@1 | 93.2 | 153
General Reasoning | MMLU-Pro | Avg@8 Accuracy | 0.52 | 51
Reasoning | ARC Challenge | - | - | 45
Mathematical Reasoning | OlympiadBench | Pass@1 | 60 | 39
Mathematical Reasoning | AIME 24 | Avg@32 Accuracy | 51 | 23
Logical Reasoning | Countdown CD3 | Avg@16 | 78.2 | 14
Mathematical Reasoning | AMC23 | Avg@32 Accuracy | 90.5 | 14
Logical Reasoning | Countdown CD4 | Avg@16 | 58.7 | 14
General Reasoning | GPQA Diamond | Avg@8 Accuracy | 26.8 | 14
