Every Step Evolves: Scaling Reinforcement Learning for Trillion-Scale Thinking Model
About
We present Ring-1T, the first open-source, state-of-the-art thinking model at the trillion-parameter scale. It comprises 1 trillion total parameters and activates approximately 50 billion per token. Training at this scale introduces unprecedented challenges, including train-inference misalignment, inefficient rollout processing, and bottlenecks in the RL system. To address these, we pioneer three interconnected innovations: (1) IcePop stabilizes RL training via token-level discrepancy masking and clipping, resolving the instability caused by training-inference mismatch; (2) C3PO++ improves resource utilization for long rollouts under a token budget by dynamically partitioning them, yielding substantially higher time efficiency; and (3) ASystem, a high-performance RL framework designed to overcome the systemic bottlenecks that impede trillion-parameter model training. Ring-1T delivers breakthrough results across critical benchmarks: 93.4 on AIME-2025, 86.72 on HMMT-2025, 2088 on CodeForces, and 55.94 on ARC-AGI-1. Notably, it attains a silver-medal-level result on IMO-2025, underscoring its exceptional reasoning capability. By releasing the complete 1T-parameter MoE model, we give the research community direct access to cutting-edge reasoning capabilities. This contribution marks a significant milestone in democratizing large-scale reasoning intelligence and establishes a new baseline for open-source model performance.
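To make the IcePop idea concrete, here is a minimal sketch of token-level discrepancy masking: each token's probability ratio between the training engine and the inference engine is checked against a band, and tokens outside the band are masked out of the gradient. The function name, band bounds, and thresholds are illustrative assumptions, not the actual Ring-1T implementation.

```python
import math

def icepop_token_mask(train_logprobs, infer_logprobs, low=0.5, high=2.0):
    """Illustrative token-level discrepancy mask (not the official IcePop code).

    Keeps a token's gradient only if the train/inference probability ratio
    p_train / p_infer stays inside [low, high]; the bounds are hypothetical.
    """
    mask = []
    for lp_t, lp_i in zip(train_logprobs, infer_logprobs):
        # Ratio of per-token probabilities between the two engines.
        ratio = math.exp(lp_t - lp_i)
        # Tokens with a large train-inference mismatch are masked (excluded
        # from the RL loss); in-band tokens contribute normally.
        mask.append(low <= ratio <= high)
    return mask
```

For example, `icepop_token_mask([-1.0, -2.0], [-1.05, -3.5])` keeps the first token (ratio ≈ 1.05, in band) and drops the second (ratio ≈ 4.48, out of band). In practice a clipping term would additionally bound the surviving ratios, as in PPO-style objectives.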
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reasoning | HellaSwag (HS) | Accuracy | 81.59 | 142 |
| Reasoning | PIQA | Accuracy | 91.95 | 133 |
| Text-to-SQL | Spider | Exec Acc (All) | 80.58 | 57 |
| Coding | HumanEval+ | Pass@1 | 87.58 | 31 |
| Knowledge | MMLU-Pro | Score | 77.55 | 30 |
| Reasoning | DROP | Score | 88.32 | 21 |
| Coding | MultiPL-E | Score | 67.09 | 20 |
| Knowledge | GPQA | Score | 69.16 | 17 |
| Alignment | IFEval strict prompt | Pass@1 | 76.16 | 16 |
| Knowledge | C-Eval | Score | 87.54 | 12 |