
Recurrent Off-Policy Deep Reinforcement Learning Doesn't Have to be Slow

About

Recurrent off-policy deep reinforcement learning models achieve state-of-the-art performance but are often sidelined because of their high computational demands. In response, we introduce RISE (Recurrent Integration via Simplified Encodings), a novel approach that can leverage recurrent networks in any image-based off-policy RL setting without significant computational overhead by combining learnable and non-learnable encoder layers. When integrating RISE into leading non-recurrent off-policy RL algorithms, we observe a 35.6% human-normalized interquartile mean (IQM) performance improvement across the Atari benchmark. We analyze various implementation strategies to highlight the versatility and potential of our proposed framework.
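The abstract describes combining non-learnable encoder layers with learnable ones before a recurrent network. A minimal sketch of that idea, assuming a frozen random projection as the non-learnable encoder and a simple learnable recurrence (all names and dimensions below are illustrative assumptions, not details from the RISE paper):

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM = 84 * 84   # flattened Atari-style frame (assumed size)
ENC_DIM = 64        # compact "simplified encoding" size (assumed)
HID_DIM = 32        # recurrent hidden-state size (assumed)

# Non-learnable encoder layer: a fixed random projection that is
# never updated during training, so it adds little compute overhead.
W_frozen = rng.standard_normal((ENC_DIM, OBS_DIM)) / np.sqrt(OBS_DIM)

# Learnable layers: input and recurrent weights of a minimal
# tanh recurrence (a stand-in for a GRU/LSTM; hypothetical).
W_in = rng.standard_normal((HID_DIM, ENC_DIM)) * 0.1
W_rec = rng.standard_normal((HID_DIM, HID_DIM)) * 0.1

def encode(obs):
    """Cheap, non-learnable encoding of one flattened frame."""
    return np.tanh(W_frozen @ obs)

def step(h, obs):
    """One recurrent step over the simplified encoding."""
    return np.tanh(W_in @ encode(obs) + W_rec @ h)

# Roll the recurrence over a short trajectory of random frames;
# the hidden state h summarizes observation history.
h = np.zeros(HID_DIM)
for _ in range(5):
    h = step(h, rng.standard_normal(OBS_DIM))
print(h.shape)  # (32,)
```

Only `W_in` and `W_rec` would receive gradients in training; the frozen encoder compresses each frame once, which is the kind of cost saving the abstract attributes to mixing non-learnable with learnable layers.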

Tyler Clark, Christine Evers, Jonathon Hare • 2025

Related benchmarks

Task: Atari Game Playing
Dataset: Atari-57 (test)
Result: Alien Score 2.38e+4
Rank: 8
