
Scale-Free Online Learning

About

We design and analyze algorithms for online linear optimization that achieve optimal regret while requiring no upper or lower bounds on the norms of the loss vectors. Our algorithms are instances of the Follow the Regularized Leader (FTRL) and Mirror Descent (MD) meta-algorithms. We achieve adaptiveness to the norms of the loss vectors through scale invariance: our algorithms make exactly the same decisions if the sequence of loss vectors is multiplied by any positive constant. The algorithm based on FTRL works for any decision set, bounded or unbounded. For unbounded decision sets, this is the first adaptive algorithm for online linear optimization with a non-vacuous regret bound. In contrast, we show lower bounds for scale-free algorithms based on MD on unbounded domains.
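The scale-invariance property described above can be illustrated with a minimal sketch of FTRL with a squared-norm regularizer whose weight is the square root of the cumulative squared loss norms (an AdaGrad-style scaling; this is an illustrative toy on the unit ball, not the paper's exact algorithm). Because multiplying every loss vector by c > 0 scales both the cumulative loss sum and the regularizer weight by c, the argmin, and hence every decision, is unchanged:

```python
import numpy as np

def scale_free_ftrl(loss_vectors, dim):
    """Toy scale-free FTRL on the unit Euclidean ball.

    At round t, play the minimizer of
        <G_{t-1}, x> + sqrt(sum_{s<t} ||g_s||^2) * ||x||^2 / 2
    over the unit ball, which in closed form is
        x_t = -G_{t-1} / sqrt(sum_{s<t} ||g_s||^2),
    projected onto the ball. Scaling all losses by c > 0 leaves
    x_t unchanged: -cG / sqrt(c^2 S) = -G / sqrt(S).
    """
    G = np.zeros(dim)   # running sum of loss vectors
    S = 0.0             # running sum of squared loss norms
    plays = []
    for g in loss_vectors:
        if S > 0:
            x = -G / np.sqrt(S)
            n = np.linalg.norm(x)
            if n > 1.0:           # project onto the unit ball
                x = x / n
        else:
            x = np.zeros(dim)     # first round: no information yet
        plays.append(x)
        G += g
        S += float(g @ g)
    return plays
```

Running the sketch on a loss sequence and on the same sequence multiplied by any positive constant produces identical iterates, which is exactly the scale-invariance the abstract refers to.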

Francesco Orabona, Dávid Pál • 2016

Related benchmarks

| Task | Dataset | Marginal Coverage (%) | Rank |
|---|---|---|---|
| Online Conformal Prediction | Electricity Demand | 95 | 6 |
| Online Conformal Prediction | Sinusoid Dataset (synthetic) | 95 | 6 |
| Online Conformal Prediction | AMZN | 94.7 | 6 |
| Online Conformal Prediction | GOOGL (test) | 94.8 | 6 |
| Online Conformal Prediction | AXP | 94.8 | 6 |
| Online Conformal Prediction | AAPL | 94.5 | 6 |
| Online Conformal Prediction | Stationary Dataset (synthetic) | 94.6 | 6 |
| Online Conformal Prediction | Mix Dataset synthetic (test) | 92.7 | 6 |
| Online Conformal Prediction | AXP (test) | 94.8 | 4 |
