Delayed Homomorphic Reinforcement Learning for Environments with Delayed Feedback
About
Reinforcement learning in real-world systems is often subject to delayed feedback, which breaks the Markov assumption and impedes both learning and control. Canonical state-augmentation approaches cause state-space explosion, introducing a severe sample-complexity burden. Despite recent progress, state-of-the-art augmentation-based baselines remain incomplete: they either predominantly reduce the burden on the critic or treat the actor and critic in a non-unified way. To provide a structured and sample-efficient solution, we propose delayed homomorphic reinforcement learning (DHRL), a framework grounded in MDP homomorphisms that collapses belief-equivalent augmented states and enables efficient policy learning on the resulting abstract MDP without loss of optimality. We provide theoretical analyses of state-space compression bounds and sample complexity, and introduce a practical algorithm. Experiments on continuous control tasks from the MuJoCo benchmark confirm that our algorithm outperforms strong augmentation-based baselines, particularly under long delays.
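To illustrate the state-space explosion and the kind of collapse an MDP homomorphism can achieve, here is a minimal sketch on a hypothetical toy problem (a deterministic 4-state ring MDP of my own construction, not the paper's environments or algorithm): with delay d, the augmented state pairs a d-step-old observation with the d pending actions, but in a deterministic MDP the belief over the current state is a point mass, so all augmented states that simulate forward to the same state are belief-equivalent and can be merged.

```python
from itertools import product

# Hypothetical toy MDP: a 4-state ring, actions move left or right.
STATES = range(4)
ACTIONS = (-1, +1)

def step(s, a):
    return (s + a) % 4

def abstract(obs, pending):
    """Map an augmented state (d-step-old observation plus the d pending
    actions) to the current state it determines. Because the toy MDP is
    deterministic, the belief is a point mass and this map collapses all
    belief-equivalent augmented states."""
    s = obs
    for a in pending:
        s = step(s, a)
    return s

d = 3  # observation delay
# Augmented state space grows as |S| * |A|^d ...
augmented = list(product(STATES, product(ACTIONS, repeat=d)))
# ... while the abstract space stays at most |S|.
abstract_states = {abstract(obs, acts) for obs, acts in augmented}
print(len(augmented), len(abstract_states))  # prints "32 4"
```

Here 4 x 2^3 = 32 augmented states compress to 4 abstract ones; in stochastic environments the abstraction would instead group augmented states with equal beliefs, which is the regime DHRL targets.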
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reinforcement Learning | HalfCheetah-v3 | Mean Reward | 5.23e+3 | 34 |
| Reinforcement Learning | InvertedPendulum-v2 | Mean Reward | 949.8 | 27 |
| Reinforcement Learning | Humanoid-v3 | Average Final Return | 3.32e+3 | 26 |
| Reinforcement Learning | Hopper-v3 | Average Final Return | 2.51e+3 | 26 |
| Reinforcement Learning | Ant-v3 | Average Final Return | 3.85e+3 | 26 |
| Reinforcement Learning | Walker2d-v3 | Average Final Return | 2.70e+3 | 26 |