
Deep Bandits Show-Off: Simple and Efficient Exploration with Deep Networks

About

Designing efficient exploration is central to Reinforcement Learning due to the fundamental problem posed by the exploration-exploitation dilemma. Bayesian exploration strategies like Thompson Sampling resolve this trade-off in a principled way by modeling and updating the distribution over the parameters of the action-value function, the outcome model of the environment. However, this technique becomes infeasible for complex environments due to the computational intractability of maintaining probability distributions over parameters of outcome models of corresponding complexity. Moreover, the approximation techniques introduced to mitigate this issue typically result in poor exploration-exploitation trade-offs, as observed in the case of deep neural network models with approximate posterior methods, which have been shown to underperform in the deep bandit scenario. In this paper, we introduce Sample Average Uncertainty (SAU), a simple and efficient uncertainty measure for contextual bandits. While Bayesian approaches like Thompson Sampling estimate outcome uncertainty indirectly by first quantifying the variability over the parameters of the outcome model, SAU is a frequentist approach that directly estimates the uncertainty of the outcomes based on the value predictions. Importantly, we show theoretically that the uncertainty measure estimated by SAU asymptotically matches the uncertainty provided by Thompson Sampling, as well as its regret bounds. Because of its simplicity, SAU can be seamlessly applied to deep contextual bandits as a very scalable drop-in replacement for epsilon-greedy exploration. We confirm our theory empirically by showing that SAU-based exploration outperforms current state-of-the-art deep Bayesian bandit methods on several real-world datasets at modest computation cost. Code is available at https://github.com/ibm/sau-explore.
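The abstract describes SAU as a drop-in replacement for epsilon-greedy: instead of modeling a posterior over parameters, it tracks a sample average of squared prediction errors per arm and uses it to scale exploration noise around the value predictions. The snippet below is a minimal sketch of that idea on a toy linear contextual bandit; the synthetic environment, the per-arm least-squares value models, and the exact noise scaling are illustrative assumptions, not the paper's implementation (see the linked repository for that).

```python
import numpy as np

rng = np.random.default_rng(0)
n_arms, dim, n_rounds = 3, 5, 2000

# Hypothetical toy environment: each arm's expected reward is linear in the context.
true_theta = rng.normal(size=(n_arms, dim))

# SAU statistics: per-arm pull counts and running sums of squared prediction errors.
counts = np.ones(n_arms)        # start at 1 to avoid division by zero
sum_sq_err = np.ones(n_arms)    # optimistic initialization encourages early exploration

# Simple per-arm linear value models fit by regularized online least squares.
A = np.stack([np.eye(dim) for _ in range(n_arms)])  # Gram matrices
b = np.zeros((n_arms, dim))

for t in range(n_rounds):
    x = rng.normal(size=dim)                              # observe context
    theta_hat = np.linalg.solve(A, b[..., None])[..., 0]  # per-arm parameter estimates
    mu = theta_hat @ x                                    # value predictions

    # SAU-style sampling: perturb each prediction with noise whose variance is the
    # sample-average squared prediction error, shrunk by the arm's pull count.
    tau2 = sum_sq_err / counts
    scores = mu + rng.normal(size=n_arms) * np.sqrt(tau2 / counts)
    a = int(np.argmax(scores))

    r = true_theta[a] @ x + 0.1 * rng.normal()            # observe reward

    # Update the chosen arm's SAU statistics and value model.
    counts[a] += 1
    sum_sq_err[a] += (r - mu[a]) ** 2
    A[a] += np.outer(x, x)
    b[a] += r * x
```

The key contrast with Thompson Sampling is that nothing here maintains a distribution over model parameters: uncertainty is read directly off realized prediction errors, so the same loop works unchanged if the linear models are swapped for deep networks.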

Rong Zhu, Mattia Rigotti • 2021

Related benchmarks

Task              | Dataset                   | Metric                       | Result | Rank
Contextual Bandit | Wheel Bandit (delta=0.50) | Normalized Cumulative Regret | 12.49  | 9
Contextual Bandit | Wheel Bandit (delta=0.70) | Normalized Cumulative Regret | 13.72  | 9
Contextual Bandit | Wheel Bandit (delta=0.90) | Normalized Cumulative Regret | 36.54  | 9
Contextual Bandit | Wheel Bandit (delta=0.95) | Normalized Cumulative Regret | 63.3   | 9
Contextual Bandit | Mushroom                  | Relative Cumulative Regret   | 2.2    | 9
Contextual Bandit | Statlog                   | Relative Cumulative Regret   | 0.6    | 9
Contextual Bandit | Covertype                 | Relative Cumulative Regret   | 27.46  | 9
Contextual Bandit | Financial                 | Relative Cumulative Regret   | 5.26   | 9
Contextual Bandit | Jester                    | Relative Cumulative Regret   | 61.02  | 9
Contextual Bandit | Adult                     | Relative Cumulative Regret   | 74.62  | 9

(Showing 10 of 12 rows)

Other info

Code: https://github.com/ibm/sau-explore