
Towards Flash Thinking via Decoupled Advantage Policy Optimization

About

Recent Large Reasoning Models (LRMs) have achieved remarkable performance on complex problems via supervised fine-tuning (SFT) and reinforcement learning (RL). Although existing RL algorithms significantly enhance model accuracy, they still suffer from excessively lengthy responses and overthinking, which increase inference latency and computational cost, especially on simple tasks that require minimal reasoning. To address this, we propose DEPO, a novel RL framework that reduces inefficient reasoning. Our method consists of three core components: (1) an innovative advantage-decoupling algorithm that guides the model to reduce inefficient tokens; (2) a difficulty-aware length penalty that lowers the overall length of model responses; and (3) an advantage-clipping method that prevents bias in policy optimization. In our experiments, with DeepSeek-Distill-Qwen-7B and DeepSeek-Distill-Qwen-1.5B as base models, DEPO achieves a significant 39% reduction in sequence length and cuts down excessive reasoning paths of inefficient tokens, while outperforming the base models in overall accuracy.
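The abstract names three components but gives no formulas. As a rough illustration only, the snippet below sketches how a difficulty-aware length penalty and advantage clipping could plug into a group-normalized (GRPO-style) advantage computation; the function name, the `alpha`/`clip_range` parameters, and every formula are assumptions for illustration, not DEPO's actual algorithm.

```python
import numpy as np

def depo_style_advantages(rewards, lengths, pass_rate,
                          alpha=0.1, clip_range=2.0):
    """Illustrative sketch (not the paper's method): shape rewards with a
    difficulty-aware length penalty, normalize within the sample group,
    then clip the resulting advantages."""
    rewards = np.asarray(rewards, dtype=float)
    lengths = np.asarray(lengths, dtype=float)

    # Difficulty-aware length penalty: the easier the problem
    # (higher group pass rate), the harder long responses are penalized.
    norm_len = (lengths - lengths.mean()) / (lengths.std() + 1e-8)
    shaped = rewards - alpha * pass_rate * norm_len

    # Group-relative advantage (GRPO-style normalization over the group).
    adv = (shaped - shaped.mean()) / (shaped.std() + 1e-8)

    # Advantage clipping to keep outliers from biasing the policy update.
    return np.clip(adv, -clip_range, clip_range)

# Four sampled responses to one prompt: binary correctness rewards,
# token lengths, and the group's pass rate as a difficulty proxy.
adv = depo_style_advantages(
    rewards=[1, 1, 0, 1],
    lengths=[800, 1500, 2000, 600],
    pass_rate=0.75,
)
```

Under this toy shaping, a short correct response earns a larger advantage than a long correct one, which is the qualitative behavior the abstract describes.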

Zezhong Tan, Hang Gao, Xinhong Ma, Feng Zhang, Ziqiang Dong • 2025

Related benchmarks

| Task           | Dataset   | Metric   | Result | Rank |
|----------------|-----------|----------|--------|------|
| Math Reasoning | MATH 500  | Accuracy | 94.4   | 38   |
| Math Reasoning | AIME 2024 | Accuracy | 0.527  | 37   |
| Math Reasoning | AIME 2025 | Accuracy | 39.2   | 33   |
| Math Reasoning | AMC 2023  | Accuracy | 90.5   | 26   |
