
Demonstration-Guided Continual Reinforcement Learning in Dynamic Environments

About

Reinforcement learning (RL) excels in various applications but struggles in dynamic environments where the underlying Markov decision process evolves. Continual reinforcement learning (CRL) enables RL agents to continually learn and adapt to new tasks, but balancing stability (preserving prior knowledge) and plasticity (acquiring new knowledge) remains challenging. Existing methods primarily address the stability-plasticity dilemma through mechanisms in which past knowledge influences optimization but rarely affects the agent's behavior directly, which may hinder effective knowledge reuse and efficient learning. In contrast, we propose demonstration-guided continual reinforcement learning (DGCRL), which stores prior knowledge in an external, self-evolving demonstration repository that directly guides RL exploration and adaptation. For each task, the agent dynamically selects the most relevant demonstration and follows a curriculum-based strategy to accelerate learning, gradually shifting from demonstration-guided exploration to fully autonomous exploration. Extensive experiments on 2D navigation and MuJoCo locomotion tasks demonstrate its superior average performance, enhanced knowledge transfer, mitigation of forgetting, and improved training efficiency. An additional sensitivity analysis and ablation study further validate its effectiveness.
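The two mechanisms the abstract describes, selecting the most relevant stored demonstration for a new task and a curriculum that gradually hands control from the demonstration back to the agent, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the repository layout, the similarity function, and the linear decay schedule are all assumptions.

```python
import random

def select_demonstration(repo, task_embedding, similarity):
    """Pick the stored demonstration most relevant to the new task.
    `repo` is a list of dicts with an 'embedding' key; `similarity`
    is a hypothetical scoring function (higher = more relevant)."""
    return max(repo, key=lambda demo: similarity(demo["embedding"], task_embedding))

def curriculum_action(step, total_steps, demo_action, policy_action):
    """Curriculum-based exploration: early in training the agent mostly
    follows the demonstration; as training progresses it relies on its
    own policy. A linear decay of the demonstration probability is an
    assumption for illustration."""
    p_demo = max(0.0, 1.0 - step / total_steps)  # decays from 1.0 to 0.0
    return demo_action if random.random() < p_demo else policy_action
```

At step 0 the agent always follows the demonstration; once `step` reaches `total_steps` it explores entirely on its own, matching the shift the abstract describes.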

Xue Yang, Michael Schukat, Junlin Lu, Patrick Mannion, Karl Mason, Enda Howley • 2025

Related benchmarks

| Task            | Dataset       | Metric | Result | Rank |
|-----------------|---------------|--------|--------|------|
| 2D Navigation   | Navigation v1 | AP     | -6.74  | 6    |
| 2D Navigation   | Navigation v2 | AP     | -7.72  | 6    |
| 2D Navigation   | Navigation v3 | AP     | -3.25  | 6    |
| Robotic Control | Hopper        | AP     | 93.85  | 6    |
| Robotic Control | Ant           | AP     | 80.25  | 6    |
| Robotic Control | Half Cheetah  | AP     | -3.58  | 6    |
