Tracking the Best Disjunction

Authors: Peter Auer, Manfred K. Warmuth

Abstract

Littlestone developed a simple deterministic on-line learning algorithm for learning k-literal disjunctions. This algorithm (called \({WINNOW}\)) keeps one weight for each of the n variables and does multiplicative updates to its weights. We develop a randomized version of \({WINNOW}\) and prove bounds for an adaptation of the algorithm for the case when the disjunction may change over time. In this case a possible target disjunction schedule \({\mathcal{T}}\) is a sequence of disjunctions (one per trial), and the shift size is the total number of literals that are added to or removed from the disjunctions as one progresses through the sequence.
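The multiplicative-update scheme the abstract attributes to \({WINNOW}\) can be sketched as follows. This is an illustrative implementation only, not code from the paper: the promotion factor (2), the threshold (set to n), and all function names are assumptions, and the demotion step here multiplies by 1/alpha (a common variant), whereas Littlestone's original WINNOW1 sets demoted weights to zero.

```python
# Hedged sketch of a WINNOW-style learner: one weight per variable,
# multiplicative updates on mistakes. Parameter choices are illustrative.

def winnow_predict(weights, x, theta):
    """Predict 1 iff the weighted sum over active variables reaches theta."""
    return 1 if sum(w for w, xi in zip(weights, x) if xi) >= theta else 0

def winnow_update(weights, x, y, y_hat, alpha=2.0):
    """Multiplicative update on a mistake: promote active weights on a
    false negative, demote them on a false positive (variant behavior;
    the original WINNOW1 zeroes demoted weights)."""
    if y_hat == y:
        return weights
    factor = alpha if y == 1 else 1.0 / alpha
    return [w * factor if xi else w for w, xi in zip(weights, x)]

def run_winnow(examples, n, alpha=2.0):
    """Run the learner over a trial sequence; return final weights and
    the total mistake count (the quantity WINNOW's bounds control)."""
    weights = [1.0] * n
    theta = n  # assumed threshold choice
    mistakes = 0
    for x, y in examples:
        y_hat = winnow_predict(weights, x, theta)
        if y_hat != y:
            mistakes += 1
            weights = winnow_update(weights, x, y, y_hat, alpha)
    return weights, mistakes
```

For a 2-literal target disjunction over n variables, weights of relevant variables are promoted toward the threshold on each false negative, which is what yields WINNOW's mistake bound of O(k log n) rather than a linear dependence on n.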

Keywords: on-line learning, prediction, concept drift, \({WINNOW}\), computational learning theory, amortized analysis


Paper URL: https://doi.org/10.1023/A:1007472513967