
Prioritized Experience Replay

About

Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 41 out of 49 games.
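The prioritization idea in the abstract can be sketched as a replay buffer that samples transitions in proportion to their temporal-difference (TD) error and corrects the resulting bias with importance-sampling weights. This is an illustrative minimal sketch, not the authors' implementation: the class and parameter names are made up, and a practical version would use a sum-tree for O(log n) sampling instead of the O(n) scan below.

```python
import random

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized replay (O(n) sampling; no sum-tree)."""

    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha    # how strongly priorities skew sampling (0 = uniform)
        self.eps = eps        # keeps every transition's priority strictly positive
        self.buffer, self.priorities = [], []
        self.pos = 0          # circular write position

    def add(self, transition):
        # New transitions get the current max priority so each is replayed at least once.
        prio = max(self.priorities, default=1.0)
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(prio)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        # P(i) ∝ p_i^alpha
        scaled = [p ** self.alpha for p in self.priorities]
        total = sum(scaled)
        probs = [s / total for s in scaled]
        idxs = random.choices(range(len(self.buffer)), weights=probs, k=batch_size)
        # Importance-sampling weights w_i = (N * P(i))^(-beta), normalized by max.
        n = len(self.buffer)
        weights = [(n * probs[i]) ** (-beta) for i in idxs]
        max_w = max(weights)
        weights = [w / max_w for w in weights]
        return idxs, [self.buffer[i] for i in idxs], weights

    def update_priorities(self, idxs, td_errors):
        # Priority is the magnitude of the TD error plus a small epsilon.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = abs(err) + self.eps
```

In use, the agent samples a batch, scales each transition's loss by its importance weight, then writes the new TD errors back via `update_priorities`, so surprising transitions are revisited more often.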

Tom Schaul, John Quan, Ioannis Antonoglou, David Silver • 2015

Related benchmarks

Task                          | Dataset                                               | Metric                        | Result | Rank
Reinforcement Learning        | Atari 2600 Montezuma's Revenge                        | Score                         | 13     | 45
Atari Game Playing            | Pitfall!                                              | Score                         | -15    | 25
Reinforcement Learning        | Atari 2600 57 games                                   | Median Human-Normalized Score | 140    | 20
Visual Reinforcement Learning | CARLA (#GP scenario)                                  | ER                            | 51     | 15
Autonomous Driving            | CARLA (#HW)                                           | Error Rate                    | 159    | 15
Reinforcement Learning        | Atari 2600 57 games (test)                            | Median Human-Normalized Score | 124    | 15
Atari Game Playing            | Atari 2600 57 games (human starts evaluation metric)  | Median Human-Normalized Score | 128    | 14
Game Playing                  | Atari 2600 (Arcade Learning Environment) v1 (test)    | Alien Score                   | 900.5  | 13
Reinforcement Learning        | Atari 2600 55 games (test)                            | Mean Human-Normalized Score   | 580    | 7
Reinforcement Learning        | Atari 57 (30 no-ops)                                  | Mean HNS                      | 434.6  | 6
(Showing 10 of 20 benchmark rows.)
