Learning with stochastic inputs and adversarial outputs

Authors:

Highlights:

Abstract

Most research in online learning focuses either on the adversarial classification problem (i.e., both inputs and labels are chosen arbitrarily by an adversary) or on the traditional supervised learning problem, in which samples are independent and identically distributed according to a stationary probability distribution. Nonetheless, in a number of domains the relationship between inputs and outputs may be adversarial, whereas input instances are i.i.d. from a stationary distribution (e.g., user preferences). This scenario can be formalized as a learning problem with stochastic inputs and adversarial outputs. In this paper, we introduce this novel stochastic–adversarial learning setting and analyze its learnability. In particular, we show that in a binary classification problem over a horizon of n rounds, given a hypothesis space H with finite VC-dimension, it is possible to design an algorithm that incrementally builds a suitable finite set of hypotheses from H, used as input for an exponentially weighted forecaster, and achieves a cumulative regret of order O(√(n VC(H) log n)) with overwhelming probability. This result shows that whenever inputs are i.i.d., any binary classification problem over a finite VC-dimension hypothesis space can be solved with sub-linear regret, regardless of how the labels are generated (stochastically or adversarially). We also discuss extensions to multi-class classification, regression, learning from experts, and bandit settings with stochastic side information, as well as applications to games.
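The core prediction component described above is a standard exponentially weighted forecaster run over a finite set of hypotheses. The following is a minimal sketch of such a forecaster for binary classification, not the paper's algorithm itself (which also incrementally constructs the hypothesis set from H); the function name, the weighted-vote prediction rule, and the choice of learning rate are illustrative assumptions.

```python
import math

def exponentially_weighted_forecaster(hypotheses, stream, eta):
    """Predict with a finite set of hypotheses using exponential weights.

    hypotheses: list of classifiers, each a function x -> {0, 1}
    stream: iterable of (x, y) pairs; the labels y may be adversarial
    eta: learning rate, e.g. sqrt(8 * ln(len(hypotheses)) / n)
    Returns the forecaster's cumulative number of mistakes.
    """
    weights = [1.0] * len(hypotheses)
    mistakes = 0
    for x, y in stream:
        preds = [h(x) for h in hypotheses]
        total = sum(weights)
        # Weighted vote: predict 1 if the weighted mass on label 1 is >= 1/2.
        p1 = sum(w for w, p in zip(weights, preds) if p == 1) / total
        y_hat = 1 if p1 >= 0.5 else 0
        mistakes += int(y_hat != y)
        # Exponential update: shrink the weight of every hypothesis
        # that erred on this round; correct hypotheses are untouched.
        weights = [w * math.exp(-eta * int(p != y))
                   for w, p in zip(weights, preds)]
    return mistakes
```

With a finite pool of N hypotheses, this forecaster's regret against the best hypothesis in the pool is O(√(n log N)); the paper's contribution is, roughly, that under i.i.d. inputs a finite pool of size governed by VC(H) suffices to cover the whole space H, yielding the O(√(n VC(H) log n)) bound.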

Keywords: Online learning, Hybrid stochastic–adversarial learning

Article history: Received 24 February 2010, Revised 2 March 2011, Accepted 22 December 2011, Available online 21 January 2012.

DOI: https://doi.org/10.1016/j.jcss.2011.12.027