
Deep Reinforcement Learning with Double Q-learning

About

The popular Q-learning algorithm is known to overestimate action values under certain conditions. It was not previously known whether, in practice, such overestimations are common, whether they harm performance, and whether they can generally be prevented. In this paper, we answer all these questions affirmatively. In particular, we first show that the recent DQN algorithm, which combines Q-learning with a deep neural network, suffers from substantial overestimations in some games in the Atari 2600 domain. We then show that the idea behind the Double Q-learning algorithm, which was introduced in a tabular setting, can be generalized to work with large-scale function approximation. We propose a specific adaptation to the DQN algorithm and show that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but that this also leads to much better performance on several games.
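The core change the abstract describes is decoupling action *selection* (done by the online network) from action *evaluation* (done by the target network) when forming the bootstrap target. A minimal sketch of that target computation is below; the linear "networks", state/action sizes, and function names are hypothetical stand-ins (the paper uses deep convolutional networks trained on Atari frames):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the online and target Q-networks: fixed weight
# matrices mapping a 4-dim state to Q-values over 3 actions.
W_online = rng.normal(size=(4, 3))
W_target = rng.normal(size=(4, 3))

def q_online(state):
    return state @ W_online

def q_target(state):
    return state @ W_target

def double_dqn_target(reward, next_state, done, gamma=0.99):
    """Double DQN: select the greedy action with the online network,
    but evaluate that action with the target network."""
    a_star = int(np.argmax(q_online(next_state)))   # selection
    bootstrap = q_target(next_state)[a_star]        # evaluation
    return reward + (0.0 if done else gamma * bootstrap)

def dqn_target(reward, next_state, done, gamma=0.99):
    """Standard DQN: max over the target network's own values, which
    couples selection and evaluation and tends to overestimate."""
    return reward + (0.0 if done else gamma * np.max(q_target(next_state)))

s_next = rng.normal(size=4)
```

Because the standard target takes a max over the same (noisy) estimates it evaluates, it is always at least as large as the Double DQN target for the same transition, which is the overestimation bias the paper measures.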

Hado van Hasselt, Arthur Guez, David Silver · 2015

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-Objective Offline Policy Evaluation | MIMIC-IV (test) | FQE | 0.574 | 66 |
| Sepsis treatment | MIMIC-IV (test) | WIS | 0.664 | 66 |
| Reinforcement Learning | Atari 2600 Montezuma's Revenge | Score | 42 | 45 |
| Atari Game Playing | Pitfall! | Score | -30 | 25 |
| Reinforcement Learning | Atari 57 | Atlantis | 6.48e+4 | 21 |
| Reinforcement Learning | Atari 2600 57 games | Median Human-Normalized Score | 117 | 20 |
| Reinforcement Learning | Atari 2600 | Alien Score | 4.01e+3 | 15 |
| Reinforcement Learning | Atari 2600 57 games (test) | Median Human-Normalized Score | 118 | 15 |
| Atari Game Playing | Atari 2600 57 games (human starts) | Median Human-Normalized Score | 110.9 | 14 |
| Reinforcement Learning | MountainCar | Avg. Episode Reward | -100 | 14 |

Showing 10 of 51 rows.
