Reasoning over mathematical objects: on-policy reward modeling and test-time aggregation
About
The ability to precisely derive mathematical objects is a core requirement for downstream STEM applications, including mathematics, physics, and chemistry, where reasoning must culminate in formally structured expressions. Yet current LLM evaluations of mathematical and scientific reasoning rely heavily on simplified answer formats, such as numerical values or multiple-choice options, because these are convenient to assess automatically. In this paper, we make three contributions toward improving reasoning over mathematical objects: (i) we build and release training data and benchmarks for deriving mathematical objects, the Principia suite; (ii) we provide training recipes with strong LLM judges and verifiers, showing that on-policy judge training boosts performance; and (iii) we show how on-policy training can also be used to scale test-time compute via aggregation. We find that strong LLMs such as Qwen3-235B and o3 struggle on Principia, whereas our training recipes bring significant improvements across different LLM backbones while simultaneously improving results on existing numerical and MCQA tasks, demonstrating cross-format generalization of reasoning abilities.
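The paper's judge and verifier recipes are detailed in the full text; as a rough illustration of what test-time aggregation over mathematical objects can look like, the sketch below clusters sampled answers into symbolic-equivalence classes with sympy and majority-votes over the classes. This is a minimal, hypothetical example, not the paper's implementation: the function names `expressions_equivalent` and `aggregate_by_equivalence` are our own, and a simplification-based equivalence check is only one possible verifier.

```python
import sympy as sp
from sympy.parsing.sympy_parser import parse_expr


def expressions_equivalent(a: str, b: str) -> bool:
    """Check whether two expression strings are symbolically equivalent.

    A simplification-based check: parse both strings and test whether
    their difference simplifies to zero. Note this can return False for
    equivalent expressions that sympy fails to simplify, so it is a
    conservative stand-in for a real verifier.
    """
    try:
        diff = sp.simplify(parse_expr(a) - parse_expr(b))
        return diff == 0
    except (sp.SympifyError, SyntaxError, TypeError):
        return False


def aggregate_by_equivalence(candidates: list[str]) -> str:
    """Majority vote over symbolic-equivalence classes of sampled answers.

    Each sampled answer joins the first class whose representative it
    matches; the representative of the largest class is returned. This is
    self-consistency voting adapted to mathematical objects, where exact
    string matching would miss equivalent forms.
    """
    classes: list[list[str]] = []
    for cand in candidates:
        for cls in classes:
            if expressions_equivalent(cand, cls[0]):
                cls.append(cand)
                break
        else:
            classes.append([cand])
    return max(classes, key=len)[0]


# Three of the five samples are equivalent to 1, so that class wins
# even though no two of them are identical strings.
samples = ["sin(x)**2 + cos(x)**2", "1", "2*sin(x)*cos(x)", "sin(2*x)", "1"]
print(aggregate_by_equivalence(samples))  # representative of the {1} class
```

The point of voting over equivalence classes rather than raw strings is that derived objects admit many surface forms; under this (assumed) setup, `sin(x)**2 + cos(x)**2` and `1` count as the same answer.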
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Instruction Following | AlpacaEval 2.0 | Win Rate | 77.1 | 507 |
| Mathematical Reasoning | HMMT 2025 | -- | -- | 70 |
| Scientific Reasoning | PrincipiaBench | RealMath Score | 36.84 | 50 |
| Competition Mathematics | Competition Math | AIME Score | 71.2 | 20 |
| Competition Mathematics | BrumoMath 2025 | Pass@1 | 36.25 | 20 |
| Competition Mathematics | Olympiad | Pass@1 | 54.89 | 20 |
| Competition Mathematics | Competition Math Average | Pass@1 | 34 | 20 |
| Competition Mathematics | AIME 2025 | Pass@1 | 27.25 | 20 |
| Multiple-choice Question Answering | GPQA Diamond (test) | Accuracy (Overall) | 57.48 | 20 |
| Multiple-choice Question Answering | SuperGPQA MCQA | Accuracy | 63.83 | 18 |