Decoupled Continuous-Time Reinforcement Learning via Hamiltonian Flow
About
Many real-world control problems, ranging from finance to robotics, evolve in continuous time with non-uniform, event-driven decisions. Standard discrete-time reinforcement learning (RL), based on fixed-step Bellman updates, struggles in this setting: as time gaps shrink, the $Q$-function collapses to the value function $V$, eliminating action ranking. Existing continuous-time methods reintroduce action information via an advantage-rate function $q$. However, they enforce optimality through complicated martingale losses or orthogonality constraints, which are sensitive to the choice of test processes. These approaches entangle $V$ and $q$ into a large, complex optimization problem that is difficult to train reliably. To address these limitations, we propose a novel decoupled continuous-time actor-critic algorithm with alternating updates: $q$ is learned from diffusion generators on $V$, and $V$ is updated via a Hamiltonian-based value flow that remains informative under infinitesimal time steps, where standard max/softmax backups fail. Theoretically, we prove rigorous convergence via new probabilistic arguments, sidestepping the challenge that generator-based Hamiltonians lack Bellman-style contraction under the sup-norm. Empirically, our method outperforms prior continuous-time and leading discrete-time baselines across challenging continuous-control benchmarks and a real-world trading task, achieving a 21% profit over a single quarter, nearly double that of the second-best method.
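The alternating scheme described above can be illustrated on a toy problem. The sketch below is *not* the paper's algorithm: it assumes a hypothetical 1D linear-quadratic diffusion ($ds = a\,dt + \sigma\,dW$, reward $-(s^2 + a^2)$, discount rate $\beta$), a quadratic ansatz $V(s) = -k s^2 + c$, and a grid-based max in place of a learned actor. It only shows the two alternating steps in miniature: compute $q$ from the generator of the current $V$, then flow $V$ along the Hamiltonian $H(s) = \max_a q(s, a)$.

```python
import numpy as np

# Toy 1D LQ diffusion (all choices here are illustrative assumptions):
# dynamics ds = a dt + sigma dW, reward r(s, a) = -(s^2 + a^2), discount rate beta.
# V is given a quadratic ansatz V(s) = -k s^2 + c with learnable scalars k, c.
beta, sigma, eta = 1.0, 0.5, 0.05
states = np.linspace(-2.0, 2.0, 41)    # evaluation grid for s
actions = np.linspace(-3.0, 3.0, 601)  # action grid used in the Hamiltonian max

k, c = 0.1, 0.0
for _ in range(300):
    s, a = states[:, None], actions[None, :]
    # critic step: advantage-rate q from the generator of the current V,
    # q(s,a) = r(s,a) + a V'(s) + 0.5 sigma^2 V''(s) - beta V(s),
    # with V'(s) = -2 k s and V''(s) = -2 k for the quadratic ansatz
    r = -(s**2 + a**2)
    LV = a * (-2.0 * k * s) + 0.5 * sigma**2 * (-2.0 * k)
    q = r + LV - beta * (-k * s**2 + c)
    # value step: Hamiltonian flow V <- V + eta * H with H(s) = max_a q(s, a),
    # projected back onto the ansatz by least squares over the state grid
    H = q.max(axis=1)
    A = np.column_stack([-states**2, np.ones_like(states)])
    dk, dc = np.linalg.lstsq(A, eta * H, rcond=None)[0]
    k, c = k + dk, c + dc

# the flow drives the HJB residual to zero; for this toy problem the
# fixed point satisfies k^2 + k = 1, i.e. k = (sqrt(5) - 1) / 2
print(f"k = {k:.3f}, c = {c:.3f}")
```

The decoupling is visible even in this miniature: the $q$ evaluation touches only the frozen $V$, and the $V$ update consumes $q$ only through the Hamiltonian, so neither step requires a joint martingale or orthogonality loss.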
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reinforcement Learning | Walker | Average Returns | 1.04e+3 | 38 |
| Quadruped | Quadruped | Return | 959.8 | 33 |
| Reinforcement Learning | Humanoid | Zero-Shot Reward | 386.8 | 30 |
| Reinforcement Learning | cheetah | Return | 934.8 | 24 |
| Reinforcement Learning | Trading | Return | 37.72 | 24 |
| Robot Locomotion | Humanoid | Cumulative Reward | 386.8 | 16 |
| Continuous Control | cheetah | Average Reward | 934.8 | 12 |
| Trading | Trading | Return | 37.72 | 9 |