
Neural Thompson Sampling

About

Thompson Sampling (TS) is one of the most effective algorithms for solving contextual multi-armed bandit problems. In this paper, we propose a new algorithm, called Neural Thompson Sampling, which adapts deep neural networks for both exploration and exploitation. At the core of our algorithm is a novel posterior distribution of the reward, whose mean is the neural network approximator and whose variance is built upon the neural tangent features of the corresponding neural network. We prove that, provided the underlying reward function is bounded, the proposed algorithm is guaranteed to achieve a cumulative regret of $\mathcal{O}(T^{1/2})$, which matches the regret of other contextual bandit algorithms in the total number of rounds $T$. Experimental comparisons with other benchmark bandit algorithms on various data sets corroborate our theory.
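The idea in the abstract can be sketched in code: keep a network $f(x;\theta)$ as the posterior mean, use the gradient of $f$ at each context (the neural tangent feature) to build a covariance matrix $U$, sample a reward for each arm from the resulting Gaussian, and play the argmax. The sketch below is a minimal illustration under simplifying assumptions, not the paper's implementation: a one-hidden-layer numpy network, a few SGD steps per round in place of the paper's full training procedure, and illustrative hyperparameters (`m`, `lam`, `nu`, `lr`, `n_steps`).

```python
import numpy as np

rng = np.random.default_rng(0)

class NeuralTS:
    """Minimal sketch of Neural Thompson Sampling (hypothetical class name).

    Posterior over the reward of context x:
        mean     f(x; theta)          -- the network's prediction
        variance nu^2 * g^T U^{-1} g  -- g = grad_theta f(x; theta)
    """

    def __init__(self, d, m=20, lam=1.0, nu=0.5, lr=1e-2, n_steps=20):
        self.W1 = rng.normal(scale=1 / np.sqrt(d), size=(m, d))
        self.w2 = rng.normal(scale=1 / np.sqrt(m), size=m)
        self.p = d * m + m                 # total parameter count
        self.U = lam * np.eye(self.p)      # design matrix on gradient features
        self.nu, self.lr, self.n_steps = nu, lr, n_steps
        self.data = []                     # observed (context, reward) pairs

    def _forward(self, x):
        h = np.tanh(self.W1 @ x)
        return self.w2 @ h, h

    def _grad(self, x):
        # gradient of f(x; theta) w.r.t. all parameters, flattened
        h = np.tanh(self.W1 @ x)
        dW1 = np.outer(self.w2 * (1 - h**2), x)
        return np.concatenate([dW1.ravel(), h])

    def select(self, contexts):
        """Sample a reward for each arm from its posterior; play the argmax."""
        scores = []
        for x in contexts:
            f, _ = self._forward(x)
            g = self._grad(x)
            var = self.nu**2 * (g @ np.linalg.solve(self.U, g))
            scores.append(rng.normal(f, np.sqrt(max(var, 1e-12))))
        return int(np.argmax(scores))

    def update(self, x, r):
        """Rank-one update of U with the played arm's tangent feature,
        then a few SGD steps on the squared loss over past data."""
        self.data.append((x, r))
        g = self._grad(x)
        self.U += np.outer(g, g)
        for _ in range(self.n_steps):
            xi, ri = self.data[rng.integers(len(self.data))]
            f, h = self._forward(xi)
            err = f - ri
            gw2 = err * h
            gW1 = err * np.outer(self.w2 * (1 - h**2), xi)
            self.w2 -= self.lr * gw2
            self.W1 -= self.lr * gW1
```

Note the exploration mechanism: unlike UCB-style methods, no bonus is added to the mean; randomness in the posterior sample alone drives exploration, and arms whose tangent features are poorly covered by $U$ get higher sampling variance.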

Weitong Zhang, Dongruo Zhou, Lihong Li, Quanquan Gu · 2020

Related benchmarks

| Task               | Dataset                    | Result                           | Rank |
|--------------------|----------------------------|----------------------------------|------|
| Contextual Bandit  | MagicTelescope (OpenML)    | Final Cumulative Regret: 2.01e+3 | 13   |
| Contextual Bandits | Disin                      | Cumulative Regret: 523.2         | 7    |
| Contextual Bandits | MNIST                      | Cumulative Regret: 965.8         | 7    |
| Contextual Bandits | MovieLens                  | Cumulative Regret: 1.58e+3       | 7    |
| Contextual Bandits | Yelp                       | Cumulative Regret: 4.68e+3       | 7    |
| Contextual Bandit  | OpenML Adult (T=10,000)    | Cumulative Regret (Mean): 1.76e+3| 7    |
| Contextual Bandit  | OpenML Mushroom (T=8124)   | Mean Cumulative Regret: 115      | 7    |
| Contextual Bandit  | OpenML Shuttle (T=10,000)  | Cumulative Regret (Mean): 232    | 7    |
| Contextual Bandit  | Covertype (OpenML)         | Final Cumulative Regret: 3.48e+3 | 6    |
| Contextual Bandit  | GasDrift (OpenML)          | Final Cumulative Regret: 481.2   | 6    |

Showing 10 of 24 rows.
