
Efficient First-Order Contextual Bandits: Prediction, Allocation, and Triangular Discrimination

About

A recurring theme in statistical learning, online learning, and beyond is that faster convergence rates are possible for problems with low noise, often quantified by the performance of the best hypothesis; such results are known as first-order or small-loss guarantees. While first-order guarantees are relatively well understood in statistical and online learning, adapting to low noise in contextual bandits (and more broadly, decision making) presents major algorithmic challenges. In a COLT 2017 open problem, Agarwal, Krishnamurthy, Langford, Luo, and Schapire asked whether first-order guarantees are even possible for contextual bandits and -- if so -- whether they can be attained by efficient algorithms. We give a resolution to this question by providing an optimal and efficient reduction from contextual bandits to online regression with the logarithmic (or, cross-entropy) loss. Our algorithm is simple and practical, readily accommodates rich function classes, and requires no distributional assumptions beyond realizability. In a large-scale empirical evaluation, we find that our approach typically outperforms comparable non-first-order methods. On the technical side, we show that the logarithmic loss and an information-theoretic quantity called the triangular discrimination play a fundamental role in obtaining first-order guarantees, and we combine this observation with new refinements to the regression oracle reduction framework of Foster and Rakhlin. The use of triangular discrimination yields novel results even for the classical statistical learning model, and we anticipate that it will find broader use.
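To make the reduction concrete, here is a minimal sketch of the inverse-gap-weighting rule from the Foster–Rakhlin regression-oracle framework that the abstract builds on. This is an illustrative simplification, not the paper's exact first-order algorithm (which calibrates the oracle with the logarithmic loss); the function name, the `gamma` learning-rate parameter, and the use of predicted per-action losses are assumptions for the sketch.

```python
import numpy as np

def inverse_gap_weighting(predicted_losses, gamma):
    """Turn a regression oracle's predicted per-action losses into a
    sampling distribution over actions (inverse-gap weighting).

    Actions whose predicted loss is close to the best prediction get
    more probability; the greedy action absorbs the leftover mass.
    """
    predicted_losses = np.asarray(predicted_losses, dtype=float)
    K = len(predicted_losses)
    best = int(np.argmin(predicted_losses))          # greedy action
    gaps = predicted_losses - predicted_losses[best]  # nonnegative gaps
    probs = np.zeros(K)
    for a in range(K):
        if a != best:
            # Probability shrinks as the predicted gap grows.
            probs[a] = 1.0 / (K + gamma * gaps[a])
    probs[best] = 1.0 - probs.sum()  # remaining mass on the greedy action
    return probs
```

Because each non-greedy action receives at most 1/K mass, the greedy action's residual probability is always at least 1/K, so the output is a valid distribution for any `gamma >= 0`; larger `gamma` shifts the policy toward exploitation.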

Dylan J. Foster, Akshay Krishnamurthy • 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
Policy Optimization in CMAB | 1062_2 | Mean Diff from Supervised (PV-loss) | 0.0028 | 5
Policy Optimization in CMAB | 1073_2 | Mean Diff from Supervised (PV-loss) | -0.0546 | 5
Policy Optimization in CMAB | 729_2 | Mean Diff from Supervised (PV-loss) | 0.0091 | 5
Policy Optimization in CMAB | 874_2 | Mean Diff from Supervised (PV-loss) | 0.054 | 5
Policy Optimization in CMAB | 1006_2 | Mean Diff from Supervised (PV-loss) | 0.1227 | 5
Policy Optimization in CMAB | 1015_2 | Mean Diff from Supervised (PV-loss) | -0.0042 | 5
Policy Optimization in CMAB | 339_3 | Mean Diff from Supervised (PV-loss) | 0.0528 | 5
Policy Optimization in CMAB | 835_2 | Mean Diff from Supervised (PV-loss) | 0.2 | 5
Policy Optimization in CMAB | 1012_2 | Mean Diff from Supervised (PV-loss) | 0.0438 | 5
Policy Optimization in CMAB | 1084_3 | Mean Diff from Supervised (PV-loss) | 0.0828 | 5

Showing 10 of 24 rows.
