
Delayed Homomorphic Reinforcement Learning for Environments with Delayed Feedback

About

Reinforcement learning in real-world systems is often accompanied by delayed feedback, which breaks the Markov assumption and impedes both learning and control. Canonical state-augmentation approaches suffer from state-space explosion, which introduces a severe sample-complexity burden. Despite recent progress, state-of-the-art augmentation-based baselines remain incomplete: they either predominantly reduce the burden on the critic or adopt non-unified treatments of the actor and critic. To provide a structured and sample-efficient solution, we propose delayed homomorphic reinforcement learning (DHRL), a framework grounded in MDP homomorphisms that collapses belief-equivalent augmented states and enables efficient policy learning on the resulting abstract MDP without loss of optimality. We provide theoretical analyses of state-space compression bounds and sample complexity, and introduce a practical algorithm. Experiments on continuous control tasks from the MuJoCo benchmark confirm that our algorithm outperforms strong augmentation-based baselines, particularly under long delays.
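To make the "canonical state augmentation" the abstract refers to concrete, here is a minimal illustrative sketch (not the paper's DHRL algorithm; `ToyEnv` and all names are invented for this example). With an observation delay of d steps, the augmented state is the newest delivered observation plus the d actions taken since it was generated, which restores the Markov property at the cost of a state that grows with d:

```python
from collections import deque

import numpy as np


class ToyEnv:
    """1-D integrator: the state moves by the chosen action each step."""

    def __init__(self):
        self.state = 0.0

    def reset(self):
        self.state = 0.0
        return self.state

    def step(self, action):
        self.state += action
        reward = -abs(self.state)  # reward depends on the true, undelayed state
        return self.state, reward


class DelayedAugmentedEnv:
    """Wraps an environment so observations arrive `delay` steps late and
    exposes the canonical augmented state: the newest delivered observation
    concatenated with the `delay` actions taken since it was generated.
    The augmented state grows linearly in `delay` (and the induced state
    space exponentially), which is the sample-complexity burden that
    augmentation-based methods must contend with."""

    def __init__(self, env, delay):
        self.env = env
        self.delay = delay
        self.obs_queue = deque()  # observations generated but not yet delivered
        self.act_queue = deque()  # the last `delay` actions

    def reset(self):
        obs = self.env.reset()
        # By convention, the agent sees the initial observation (with
        # placeholder zero actions) until real observations catch up.
        self.obs_queue = deque([obs] * self.delay)
        self.act_queue = deque([0.0] * self.delay, maxlen=self.delay)
        return self._augmented(obs)

    def _augmented(self, delivered_obs):
        return np.concatenate([[delivered_obs], list(self.act_queue)])

    def step(self, action):
        true_obs, reward = self.env.step(action)
        self.obs_queue.append(true_obs)
        self.act_queue.append(action)
        delivered = self.obs_queue.popleft()  # observation from `delay` steps ago
        return self._augmented(delivered), reward
```

With `delay=2` and a constant action of 1.0, the observation generated at step 1 is only delivered at step 3; until then the agent must reason through the buffered actions.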

Jongsoo Lee, Jangwon Kim, Soohee Han • 2026

Related benchmarks

Task                     Dataset               Metric                  Result    Rank
Reinforcement Learning   HalfCheetah v3        Mean Reward             5.23e+3   34
Reinforcement Learning   InvertedPendulum v2   Mean Reward             949.8     27
Reinforcement Learning   Humanoid v3           Average Final Return    3.32e+3   26
Reinforcement Learning   Hopper v3             Average Final Return    2.51e+3   26
Reinforcement Learning   Ant v3                Average Final Return    3.85e+3   26
Reinforcement Learning   Walker2d v3           Average Final Return    2.70e+3   26
