
ARROW: Augmented Replay for RObust World models

About

Continual reinforcement learning challenges agents to acquire new skills while retaining previously learned ones, with the goal of improving performance on both past and future tasks. Most existing approaches rely on model-free methods with replay buffers to mitigate catastrophic forgetting; however, these solutions often face significant scalability challenges due to large memory demands. Drawing inspiration from neuroscience, where the brain replays experiences to a predictive World Model rather than directly to the policy, we present ARROW (Augmented Replay for RObust World models), a model-based continual RL algorithm that extends DreamerV3 with a memory-efficient, distribution-matching replay buffer. Unlike standard fixed-size FIFO buffers, ARROW maintains two complementary buffers: a short-term buffer for recent experiences and a long-term buffer that preserves task diversity through intelligent sampling. We evaluate ARROW on two challenging continual RL settings: tasks without shared structure (Atari), and tasks with shared structure, where knowledge transfer is possible (Procgen CoinRun variants). Compared to model-free and model-based baselines with replay buffers of the same size, ARROW demonstrates substantially less forgetting on tasks without shared structure, while maintaining comparable forward transfer. Our findings highlight the potential of model-based RL and bio-inspired approaches for continual reinforcement learning, warranting further research.
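The dual-buffer idea described above can be sketched in code. The following is a minimal, hypothetical illustration, not the paper's implementation: it pairs a short-term FIFO deque with a long-term buffer that spreads a fixed memory budget across tasks, using per-task reservoir sampling as a stand-in for the paper's unspecified "intelligent sampling" rule. All class and method names are invented for illustration.

```python
import random
from collections import deque

class DualReplayBuffer:
    """Sketch of a two-buffer replay in the spirit of ARROW's design:
    a short-term FIFO buffer for recent experience plus a long-term
    buffer that preserves diversity across tasks under a fixed budget.
    The long-term rule here (per-task reservoir sampling) is an
    assumption; the paper's distribution-matching scheme may differ."""

    def __init__(self, short_capacity, long_capacity):
        self.short = deque(maxlen=short_capacity)  # recent transitions
        self.long_capacity = long_capacity         # total long-term budget
        self.long = {}   # task_id -> list of retained transitions
        self.seen = {}   # task_id -> number of transitions observed

    def add(self, task_id, transition):
        self.short.append(transition)
        self.seen[task_id] = self.seen.get(task_id, 0) + 1
        if task_id not in self.long:
            self.long[task_id] = []
            cap = self.long_capacity // len(self.long)
            # Rebalance: shrink older tasks' shares so every task
            # gets an equal slice of the fixed long-term budget.
            for bucket in self.long.values():
                while len(bucket) > cap:
                    bucket.pop(random.randrange(len(bucket)))
        bucket = self.long[task_id]
        cap = self.long_capacity // len(self.long)
        if len(bucket) < cap:
            bucket.append(transition)
        else:
            # Reservoir sampling: each of the task's transitions is
            # retained with equal probability under the fixed cap.
            j = random.randrange(self.seen[task_id])
            if j < cap:
                bucket[j] = transition

    def sample(self, batch_size, recent_fraction=0.5):
        """Mix recent and long-term experience in each batch."""
        n_recent = min(int(batch_size * recent_fraction), len(self.short))
        pool = [t for b in self.long.values() for t in b]
        n_long = min(batch_size - n_recent, len(pool))
        return (random.sample(list(self.short), n_recent)
                + random.sample(pool, n_long))
```

In this sketch, total long-term memory never exceeds `long_capacity` no matter how many tasks arrive, which reflects the memory-efficiency goal; in a world-model setting such as DreamerV3, the sampled batches would train the model rather than the policy directly.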

Abdulaziz Alyahya, Abdallah Al Siyabi, Markus R. Ernst, Luke Yang, Levin Kuhlmann, Gideon Kowadlo • 2026

Related benchmarks

Task | Dataset | Result | Rank
Continual Reinforcement Learning | CoinRun Normalized Continual Learning | Max Performance: 1.34 | 9
Continual Learning | Atari Normalized Continual Learning | Max Performance: 1.04 | 9
Continual Reinforcement Learning | Atari Reversed task order | Forgetting: 0.039 | 3
Continual Reinforcement Learning | CoinRun | Forgetting: 0.407 | 3
Continual Reinforcement Learning | CoinRun Reversed task order | Forgetting: 0.00e+0 | 3
Continual Reinforcement Learning | Atari Default task order | Forgetting: 0.197 | 3
Continual Reinforcement Learning | CoinRun Two-cycle (train) | C1 Final Score: -0.111 | 3
Continual Reinforcement Learning | Atari Two-cycle (train) | C1 Forward Score: -0.036 | 3
