Every Step Evolves: Scaling Reinforcement Learning for Trillion-Scale Thinking Model
About
We present Ring-1T, the first open-source, state-of-the-art thinking model at the trillion-parameter scale. It comprises 1 trillion total parameters and activates approximately 50 billion per token. Training at this scale introduces unprecedented challenges, including train-inference misalignment, inefficiencies in rollout processing, and bottlenecks in the RL system. To address these, we pioneer three interconnected innovations: (1) IcePop stabilizes RL training via token-level discrepancy masking and clipping, resolving the instability caused by training-inference mismatches; (2) C3PO++ improves resource utilization for long rollouts under a token budget by dynamically partitioning them, yielding high time efficiency; and (3) ASystem, a high-performance RL framework designed to overcome the systemic bottlenecks that impede trillion-parameter model training. Ring-1T delivers breakthrough results across critical benchmarks: 93.4 on AIME-2025, 86.72 on HMMT-2025, 2088 on CodeForces, and 55.94 on ARC-AGI-1. Notably, it attains a silver-medal-level result on IMO-2025, underscoring its exceptional reasoning capabilities. By releasing the complete 1T-parameter MoE model, we give the research community direct access to cutting-edge reasoning capabilities. This contribution marks a significant milestone in democratizing large-scale reasoning intelligence and establishes a new baseline for open-source model performance.
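To make the IcePop idea concrete, below is a minimal sketch of token-level discrepancy masking and clipping. It is not the paper's implementation: the function name, the clip bounds, and the exact masking rule are assumptions. The premise from the abstract is that the training engine and the inference (rollout) engine can assign different probabilities to the same sampled token; tokens whose probability ratio drifts too far are masked out of the policy loss, and the remainder contribute a clipped ratio.

```python
import numpy as np

def icepop_mask_and_clip(train_logprobs, infer_logprobs,
                         clip_low=0.5, clip_high=2.0):
    """Hypothetical sketch of token-level discrepancy handling.

    train_logprobs: per-token log-probs recomputed by the training engine.
    infer_logprobs: per-token log-probs recorded by the inference engine
                    during rollout.
    clip_low/clip_high: illustrative bounds (not from the paper).

    Returns (mask, clipped_ratio): mask zeroes out tokens whose
    train/inference probability ratio falls outside the bounds;
    clipped_ratio bounds the contribution of the surviving tokens.
    """
    ratio = np.exp(np.asarray(train_logprobs) - np.asarray(infer_logprobs))
    mask = ((ratio >= clip_low) & (ratio <= clip_high)).astype(np.float64)
    clipped_ratio = np.clip(ratio, clip_low, clip_high)
    return mask, clipped_ratio

# Usage: a token-level policy-gradient-style loss term would then be
# weighted as mask * clipped_ratio * advantage, so badly mismatched
# tokens contribute no gradient at all.
mask, clipped = icepop_mask_and_clip([-1.0, -1.0], [-1.0, -3.0])
```

In this example the first token agrees between the two engines (ratio 1.0, kept), while the second token's ratio is exp(2) ≈ 7.39, so it is masked and its ratio clipped to the upper bound.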
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reasoning | HellaSwag (HS) | Accuracy | 81.59 | 162 |
| Reasoning | PIQA | Accuracy | 91.95 | 145 |
| Text-to-SQL | Spider | Exec Acc (All) | 80.58 | 91 |
| Coding | HumanEval+ | Pass@1 | 87.58 | 83 |
| Knowledge | MMLU-Pro | Score | 77.55 | 48 |
| Knowledge | GPQA | Score | 69.16 | 35 |
| Reasoning | DROP | Score | 88.32 | 27 |
| Math | GSM-PLUS | Score | 89.71 | 22 |
| Reasoning | MuSR | Accuracy | 71.36 | 20 |
| Coding | MultiPL-E | Score | 67.09 | 20 |