Bayesian Bellman Operators

About

We introduce a novel perspective on Bayesian reinforcement learning (RL); whereas existing approaches infer a posterior over the transition distribution or Q-function, we characterise the uncertainty in the Bellman operator. Our Bayesian Bellman operator (BBO) framework is motivated by the insight that when bootstrapping is introduced, model-free approaches actually infer a posterior over Bellman operators, not value functions. In this paper, we use BBO to provide a rigorous theoretical analysis of model-free Bayesian RL to better understand its relationship to established frequentist RL methodologies. We prove that Bayesian solutions are consistent with frequentist RL solutions, even when approximate inference is used, and derive conditions for which convergence properties hold. Empirically, we demonstrate that algorithms derived from the BBO framework have sophisticated deep exploration properties that enable them to solve continuous control tasks at which state-of-the-art regularised actor-critic algorithms fail catastrophically.
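The core BBO observation above, that bootstrapped targets b = r + γ max Q(s', a') are noisy samples of a Bellman operator applied to the current Q-function, so Bayesian regression on them yields a posterior over Bellman operators rather than over value functions, can be made concrete with a small sketch. The Python snippet below is our illustration, not the authors' implementation: the feature map, the two-action space, the Bayesian-bootstrap ensemble, and all names such as `fit_posterior_ensemble` are assumptions made for exposition. It approximates a posterior over (linear) Bellman operator parameters with a reweighted ensemble and uses Thompson sampling over posterior samples for deep exploration.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(s, a):
    # Hypothetical state-action feature map; stands in for any basis.
    return np.array([1.0, s, float(a), s * a])

def bellman_targets(batch, q_weights, gamma=0.99):
    # Sampled (noisy) applications of the Bellman operator to the current Q:
    # b = r + gamma * max_a' Q(s', a'). Model-free bootstrapping regresses on
    # these samples, not on Q itself, so inferring a posterior over the
    # regression parameters infers a posterior over Bellman operators.
    targets = []
    for (s, a, r, s_next) in batch:
        q_next = max(features(s_next, a_next) @ q_weights for a_next in (0, 1))
        targets.append(r + gamma * q_next)
    return np.array(targets)

def fit_posterior_ensemble(batch, q_weights, n_models=10, reg=0.1):
    # Bayesian-bootstrap-style posterior approximation: each ensemble member
    # fits the Bellman targets on a Dirichlet-reweighted copy of the data.
    X = np.array([features(s, a) for (s, a, _, _) in batch])
    y = bellman_targets(batch, q_weights)
    posterior = []
    for _ in range(n_models):
        w = rng.dirichlet(np.ones(len(batch))) * len(batch)  # bootstrap weights
        W = np.diag(w)
        # Weighted least squares; the ridge term `reg` plays the role of a
        # Gaussian prior over the operator parameters phi.
        phi = np.linalg.solve(X.T @ W @ X + reg * np.eye(X.shape[1]), X.T @ W @ y)
        posterior.append(phi)
    return posterior  # each phi parameterises one plausible Bellman operator

def act(s, posterior):
    # Deep exploration via Thompson sampling: draw one posterior sample and
    # act greedily with respect to it.
    phi = posterior[rng.integers(len(posterior))]
    return max((0, 1), key=lambda a: features(s, a) @ phi)

# Example usage with a toy batch of (s, a, r, s') transitions:
# batch = [(0.1, 0, 1.0, 0.2), (0.2, 1, 0.0, 0.3)]
# posterior = fit_posterior_ensemble(batch, q_weights=np.zeros(4))
# a = act(0.15, posterior)
```

Acting greedily on a single posterior sample per episode, rather than on the ensemble mean, is what gives this family of methods its deep exploration behaviour: uncertainty about the Bellman operator propagates into temporally consistent exploratory policies.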

Matthew Fellows, Kristian Hartikainen, Shimon Whiteson • 2021

Related benchmarks

Task               Dataset                                       Metric           Result  Rank
Policy Evaluation  400-State Random MDP (on-policy)              MSE              0.07    7
Policy Evaluation  400-State Random MDP (off-policy)             MSE              0.11    7
Policy Evaluation  Cart-Pole (on-policy, perfect features)       MSE              0.15    7
Policy Evaluation  Cart-Pole (off-policy, perfect features)      MSE              0.17    7
Policy Evaluation  20-Link Pole (on-policy)                      MSE              4.26    7
Policy Evaluation  20-Link Pole (off-policy)                     MSE              4.17    7
Policy Evaluation  400-State Random MDP (on-policy)              Sum of sqrt MSE  24.74   7
Policy Evaluation  20-Link Pole (off-policy)                     Sum of sqrt MSE  415.4   7
Policy Evaluation  14-State Boyan Chain (on-policy)              MSE              0.16    7
Policy Evaluation  Cart-Pole (on-policy, impoverished features)  MSE              2.46    7

Showing 10 of 13 rows.
