
Online convex optimization in the bandit setting: gradient descent without a gradient

About

We consider the general online convex optimization framework introduced by Zinkevich. In this setting, there is a sequence of convex functions. Each period, we must choose a single point (from some feasible set) and pay a cost equal to the value of the next function at our chosen point. Zinkevich shows that, if each function is revealed after the choice is made, then one can achieve vanishingly small regret relative to the best single decision chosen in hindsight. We extend this to the bandit setting, where we do not find out the entire functions but only their values at our chosen points. We show how to achieve vanishingly small regret in this setting. Our approach uses a simple approximation of the gradient that is computed from evaluating a function at a single (random) point. We show that this estimate is sufficient to mimic Zinkevich's online gradient descent analysis without access to the gradient (only being able to evaluate the function at a single point).
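The core idea can be sketched in a few lines. Below is a minimal, hedged Python illustration of one-point bandit gradient descent: at each round the learner plays a random perturbation of its current point, observes only the function value there, and scales that value into a gradient estimate g = (d/δ)·f(x + δu)·u, where u is uniform on the unit sphere. The function name, parameter values, and projection-onto-a-ball feasible set are illustrative choices, not the paper's exact setup.

```python
import numpy as np

def bandit_gradient_descent(f_seq, d, T, delta=0.1, eta=0.01, radius=1.0):
    """Sketch of gradient descent without a gradient.

    Each round t, the only feedback is the value of the convex function
    f_seq[t] at a single point. The one-point estimate
        g_t = (d / delta) * f_t(x_t + delta * u_t) * u_t
    is, in expectation over u_t, the gradient of a smoothed version of f_t,
    which suffices to drive an online gradient-descent update.
    """
    x = np.zeros(d)
    losses = []
    for t in range(T):
        u = np.random.randn(d)
        u /= np.linalg.norm(u)            # uniform direction on the unit sphere
        y = x + delta * u                 # the single point actually played
        loss = f_seq[t](y)                # the only feedback we receive
        losses.append(loss)
        g = (d / delta) * loss * u        # one-point gradient estimate
        x = x - eta * g                   # standard gradient-descent step
        n = np.linalg.norm(x)             # project back into the feasible ball
        if n > radius:
            x *= radius / n
    return x, losses
```

With a fixed convex function such as f(x) = ‖x − c‖², the average loss of the played points typically decreases over time, mirroring the vanishing-regret guarantee (up to the bias introduced by the smoothing parameter δ).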

Abraham D. Flaxman, Adam Tauman Kalai, H. Brendan McMahan• 2004

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Constrained Online Selection | MVE-S (ε = 0.06, steady-state) | Oracle Reward | 31.68 | 3 |
| Constrained Online Selection | German (ε = 0.02, a ∈ [0.30, 0.99], steady-state) | Oracle Reward | 0.4656 | 3 |
| Constrained Online Selection | COMPAS (ε = 0.03, a ∈ [0.30, 0.99], steady-state) | Oracle Reward | 0.7665 | 3 |
| Sequential Decision Making | MVE-S (tail mean over the last 200 iterations) | Reward | 0.0657 | 3 |
| Sequential Decision Making | German (tail mean over the last 200 iterations) | Reward | 0.1127 | 3 |
| Sequential Decision Making | COMPAS | Reward | 0.1377 | 3 |
