
When to use parametric models in reinforcement learning?

About

We examine the question of when and how parametric models are most useful in reinforcement learning. In particular, we look at commonalities and differences between parametric models and experience replay. Replay-based learning algorithms share important traits with model-based approaches, including the ability to plan: to use more computation without additional data to improve predictions and behaviour. We discuss when to expect benefits from either approach, and interpret prior work in this context. We hypothesise that, under suitable conditions, replay-based algorithms should be competitive with or better than model-based algorithms if the model is used only to generate fictional transitions from observed states for an update rule that is otherwise model-free. We validated this hypothesis on Atari 2600 video games. The replay-based algorithm attained state-of-the-art data efficiency, improving over prior results with parametric models.
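The hypothesised equivalence can be made concrete in a small sketch: the same model-free update rule is fed either a replayed real transition or a model-generated transition starting from an observed state. This is an illustrative tabular Q-learning example, not the paper's implementation; all names and the environment setup are hypothetical.

```python
import random
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.99
Q = defaultdict(float)  # tabular action values, keyed by (state, action)

def q_update(s, a, r, s2, actions):
    """The shared model-free Q-learning update rule."""
    target = r + GAMMA * max(Q[(s2, b)] for b in actions)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])

replay_buffer = []  # observed (s, a, r, s') tuples from real interaction

def replay_step(actions):
    # Replay-based planning: reuse a stored real transition.
    s, a, r, s2 = random.choice(replay_buffer)
    q_update(s, a, r, s2, actions)

def model_step(model, observed_states, actions):
    # Model-based planning, restricted as in the hypothesis: generate a
    # fictional transition from an observed state, then apply the same
    # otherwise model-free update.
    s = random.choice(observed_states)
    a = random.choice(actions)
    r, s2 = model(s, a)  # a learned model predicting reward and next state
    q_update(s, a, r, s2, actions)
```

Under this restriction, both variants spend extra computation on the same update rule; they differ only in whether the extra transitions come from memory or from a learned model.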

Hado van Hasselt, Matteo Hessel, John Aslanides • 2019

Related benchmarks

Task | Dataset | Result | Rank
Reinforcement Learning | Atari100k (test) | Alien Score 739.9 | 23
Reinforcement Learning | Atari 100k steps (test) | Median HNS 0.161 | 20
Reinforcement Learning | Atari 100k | Alien Score 802.3 | 18
Reinforcement Learning | Atari 7-game suite ALE (test) | Relative Score 2.065 | 13
Atari Games Performance | Atari 100k | Mean Score (HNS) 0.285 | 10
Reinforcement Learning | Atari 26 100K environment steps | Alien Score 739.9 | 9
Atari Game Playing | Atari 2600 Games 100k | Mean Human-Normalized Score 28.5 | 6
