
Averaged-DQN: Variance Reduction and Stabilization for Deep Reinforcement Learning

About

Instability and variability of Deep Reinforcement Learning (DRL) algorithms tend to adversely affect their performance. Averaged-DQN is a simple extension to the DQN algorithm, based on averaging previously learned Q-value estimates, which leads to a more stable training procedure and improved performance by reducing approximation error variance in the target values. To understand the effect of the algorithm, we examine the source of value function estimation errors and provide an analytical comparison within a simplified model. We further present experiments on the Arcade Learning Environment benchmark that demonstrate significantly improved stability and performance due to the proposed extension.

Oron Anschel, Nir Baram, Nahum Shimkin · 2016
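
To make the averaging idea from the abstract concrete: Averaged-DQN forms its target as y = r + γ max_a (1/K) Σ_{k=1}^{K} Q(s', a; θ_{t-k}), i.e. it averages the K most recently learned Q-value estimates before taking the max, which reduces the variance of the target-side approximation error. Below is a minimal, illustrative sketch of that target computation, assuming small random Q-tables as stand-ins for the K previous networks; the names (q_snapshots, averaged_dqn_target, K=5) are ours for illustration, not from the authors' code.

# Minimal sketch of the Averaged-DQN target (after Anschel et al., 2016).
# Tabular Q arrays stand in for the K previously learned networks.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, K = 4, 3, 5   # K = number of averaged Q estimates (assumed)
gamma = 0.99

# The K most recently learned Q-value estimates (here: random tables).
q_snapshots = [rng.normal(size=(n_states, n_actions)) for _ in range(K)]

def dqn_target(reward, next_state, q):
    # Standard DQN target: r + gamma * max_a Q(s', a), from a single estimate.
    return reward + gamma * q[next_state].max()

def averaged_dqn_target(reward, next_state, snapshots):
    # Averaged-DQN target: average the K previous Q-value estimates first,
    # then maximize over actions; averaging lowers target error variance.
    q_avg = np.mean([q[next_state] for q in snapshots], axis=0)
    return reward + gamma * q_avg.max()

print(dqn_target(1.0, 2, q_snapshots[-1]))        # single-estimate target
print(averaged_dqn_target(1.0, 2, q_snapshots))   # averaged target

In practice the snapshots would be the last K parameter vectors of the Q-network kept during training, so the only overhead over DQN is the extra forward passes used to form the averaged target.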

Related benchmarks

Task                           | Dataset                               | Metric           | Result | Rank
-------------------------------|---------------------------------------|------------------|--------|-----
Futures Trading                | BTC Crypto Market                     | Total Return (%) | 25.11  | 13
Futures Trading                | ETH Crypto Market                     | Total Return (%) | -46    | 13
Futures Trading                | BNB Crypto Market                     | Total Return (%) | -64.79 | 13
Futures Trading                | DOT Crypto Market                     | Total Return (%) | -65.65 | 13
Dynamic Computation Allocation | Display advertising dataset (offline) | RS Score         | 0.87   | 9
