Beyond Test-Time Training: Learning to Reason via Hardware-Efficient Optimal Control
About
Associative memory has long underpinned the design of sequential models. Beyond recall, humans reason by projecting future states and selecting goal-directed actions, a capability that modern language models increasingly require but do not natively encode. While prior work uses reinforcement learning or test-time training, planning remains external to the model architecture. We formulate reasoning as optimal control and introduce the Test-Time Control (TTC) layer, which performs finite-horizon LQR planning over latent states at inference time, represents a value function within neural architectures, and leverages it as the nested objective to enable planning before prediction. To ensure scalability, we derive a hardware-efficient LQR solver based on a symplectic formulation and implement it as a fused CUDA kernel, enabling parallel execution with minimal overhead. Integrated as an adapter into pretrained LLMs, TTC layers improve mathematical reasoning performance by up to +27.8% on MATH-500 and yield 2-3x Pass@8 improvements on AMC and AIME, demonstrating that embedding optimal control as an architectural component provides an effective and scalable mechanism for reasoning beyond test-time training.
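To make the planning component concrete, the sketch below shows a standard finite-horizon discrete-time LQR solver via the backward Riccati recursion. This is the textbook algorithm the abstract's solver is built on, not the paper's symplectic CUDA implementation; all matrices and dimensions here are illustrative toy values, and `lqr_finite_horizon` is a hypothetical helper name.

```python
import numpy as np

def lqr_finite_horizon(A, B, Q, R, Qf, T):
    """Backward Riccati recursion for finite-horizon discrete-time LQR.

    Returns gains K_t such that u_t = -K_t x_t minimizes
    sum_t (x_t' Q x_t + u_t' R u_t) + x_T' Qf x_T
    subject to the dynamics x_{t+1} = A x_t + B u_t.
    """
    P = Qf.copy()          # cost-to-go matrix, initialized at the terminal cost
    gains = []
    for _ in range(T):
        # Optimal gain at this step: K = (R + B'PB)^{-1} B'PA
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati update of the cost-to-go
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return gains[::-1]     # reorder so gains[t] applies at time t

# Toy double-integrator system (illustrative, not from the paper)
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
gains = lqr_finite_horizon(A, B, Q, R, Qf=10 * np.eye(2), T=20)

# Roll out the closed loop from an initial state: it should be driven toward zero
x = np.array([[5.0], [0.0]])
for K in gains:
    x = A @ x - B @ (K @ x)
```

Each recursion step is a small batch of matrix products and one linear solve, which is what makes a fused, parallel GPU implementation of the kind the abstract describes attractive.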
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | MATH 500 | Pass@1 | 52.8 | 239 |
| Mathematical Reasoning | AIME 25 | -- | -- | 40 |
| Mathematical Reasoning | AMC | Acc@8 | 23.34 | 27 |
| Mathematical Reasoning | AIME 24 | Accuracy@8 | 3.33 | 14 |
| Sudoku Solving | Sudoku 10k 9x9 boards (val) | Board Accuracy | 93.4 | 12 |