Lower bounds on individual sequence regret

Authors: Eyal Gofer, Yishay Mansour

Abstract

In this work, we lower bound the individual sequence anytime regret of a large family of online algorithms. This bound depends on the quadratic variation of the sequence, \(Q_T\), and the learning rate. Nevertheless, we show that any learning rate that guarantees a regret upper bound of \(O(\sqrt{Q_T})\) necessarily implies an \(\varOmega(\sqrt{Q_T})\) anytime regret on any sequence with quadratic variation \(Q_T\). The algorithms we consider are online linear optimization forecasters whose weight vector at time \(t+1\) is the gradient of a concave potential function of cumulative losses at time \(t\). We show that these algorithms include all linear Regularized Follow the Leader algorithms. We prove our result for the case of potentials with negative definite Hessians, and for potentials for the best expert setting satisfying some natural regularity conditions. In the best expert setting, we give our result in terms of the translation-invariant relative quadratic variation. We apply our lower bounds to Randomized Weighted Majority and to linear cost Online Gradient Descent. We show that our analysis can be generalized to accommodate diverse measures of variation besides quadratic variation. We apply this generalized analysis to Online Gradient Descent with a regret upper bound that depends on the variance of losses.
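To make the forecaster family concrete, the following is a minimal Python sketch (not the paper's code) of a potential-based forecaster in the best expert setting: the weight vector at round \(t+1\) is the gradient of a concave potential of the cumulative losses at round \(t\), instantiated here with the exponential potential, which recovers Randomized Weighted Majority. The learning rate `eta`, the loss distribution, and the per-round quadratic-variation proxy `Q` are illustrative assumptions and not the paper's exact definitions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, eta = 4, 1000, 0.05        # experts, rounds, learning rate (assumed values)

cum_loss = np.zeros(N)           # cumulative per-expert losses
alg_loss, Q = 0.0, 0.0           # forecaster's cumulative loss and a variation proxy

for t in range(T):
    # Weights = gradient of the concave exponential potential
    # Phi(L) = (1/eta) * log(sum_i exp(-eta * L_i)), i.e. softmax of -eta * cum_loss.
    w = np.exp(-eta * cum_loss)
    w /= w.sum()

    loss = rng.uniform(0.0, 1.0, size=N)    # any bounded adversarial losses would do
    alg_loss += w @ loss                    # expected loss of the forecaster
    cum_loss += loss
    Q += np.sum((loss - loss.mean()) ** 2)  # simplified stand-in for quadratic variation

best = cum_loss.min()
print(f"regret at T: {alg_loss - best:.2f}, Q_T: {Q:.1f}, sqrt(Q_T): {np.sqrt(Q):.1f}")
```

Under this sketch, the regret printed at the end can be compared against \(\sqrt{Q_T}\) to illustrate the \(O(\sqrt{Q_T})\) upper bound and \(\varOmega(\sqrt{Q_T})\) lower bound discussed in the abstract.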

Keywords: Regret minimization, Online learning, Online linear optimization, Regret lower bounds, Regularized Follow the Leader


Paper URL: https://doi.org/10.1007/s10994-015-5531-y