On approximating weighted sums with exponentially many terms

Abstract:

Multiplicative weight-update algorithms such as Winnow and Weighted Majority have been studied extensively because their on-line mistake bounds depend only logarithmically on N, the total number of inputs, which allows them to be applied to problems where N is exponential. However, when N is large these algorithms require techniques for efficiently computing the weighted sums of their inputs. In special cases the weighted sum can be computed exactly and efficiently, but for many problems such an approach seems infeasible. Thus we explore applications of Markov chain Monte Carlo (MCMC) methods to estimate the total weight. Our methods are very general and applicable to any representation of a learning problem for which the inputs to a linear learning algorithm can be represented as states in a completely connected, untruncated Markov chain. We give theoretical worst-case guarantees on our technique and then apply it to two problems: learning DNF formulas using Winnow, and pruning classifier ensembles using Weighted Majority. We then present empirical results on simulated data indicating that, in practice, the time complexity is much better than that implied by our worst-case theoretical analysis.
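The abstract does not spell out the estimator, but the general recipe it invokes, MCMC estimation of a weighted sum over exponentially many states, can be illustrated concretely. Below is a minimal Python sketch of one standard estimator of this kind: a telescoping product of Metropolis estimates over a cooling schedule. It is a generic illustration under assumed inputs (bit-vector states, weights of the form alpha**score(x)), not the authors' exact algorithm; the names mcmc_weighted_sum, score, alpha, n_temps, burn_in, and n_samples are all hypothetical choices for this sketch.

```python
import math
import random


def mcmc_weighted_sum(n, score, alpha=2.0, n_temps=20, burn_in=500,
                      n_samples=500, seed=0):
    """Estimate W = sum over x in {0,1}^n of alpha**score(x) without
    enumerating the 2**n states (illustrative sketch, not the paper's
    algorithm).

    Uses the telescoping identity
        W(1) = W(0) * prod_k W(b_k) / W(b_{k-1}),
    where W(b) = sum_x alpha**(b * score(x)) and W(0) = 2**n is known.
    Each ratio equals the expectation of alpha**((b_k - b_{k-1}) * score(x))
    under pi_b(x) proportional to alpha**(b * score(x)) at b = b_{k-1},
    which we estimate with a Metropolis chain on the hypercube.
    """
    rng = random.Random(seed)
    betas = [k / n_temps for k in range(n_temps + 1)]   # cooling schedule
    log_w = n * math.log(2.0)            # log W(0): all 2**n weights are 1
    x = [rng.randint(0, 1) for _ in range(n)]
    s = score(x)
    log_alpha = math.log(alpha)
    for b_prev, b_next in zip(betas, betas[1:]):
        total = 0.0
        for t in range(burn_in + n_samples):
            i = rng.randrange(n)         # propose flipping one random bit
            x[i] ^= 1
            s_new = score(x)
            log_accept = b_prev * (s_new - s) * log_alpha
            if log_accept >= 0 or rng.random() < math.exp(log_accept):
                s = s_new                # accept the move
            else:
                x[i] ^= 1                # reject: undo the flip
            if t >= burn_in:             # sample after the burn-in phase
                total += alpha ** ((b_next - b_prev) * s)
        log_w += math.log(total / n_samples)
    return math.exp(log_w)


if __name__ == "__main__":
    # Toy sanity check: score(x) = number of ones, so W = (1 + alpha)**n.
    n, alpha = 12, 2.0
    print(f"estimate: {mcmc_weighted_sum(n, sum, alpha):.4e}")
    print(f"exact:    {(1.0 + alpha) ** n:.4e}")
```

The single-bit-flip proposal keeps each Metropolis step cheap while the chain remains irreducible over the hypercube, and the toy score in the main block makes the exact value (1 + alpha)**n available for comparison against the estimate.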

Keywords: Markov chain Monte Carlo approximation, Winnow, Weighted Majority, Multiplicative weight updates, Perceptron, DNF learning, Boosting

Article history: Received 28 March 2003, Revised 8 January 2004, Available online 25 March 2004.

DOI: https://doi.org/10.1016/j.jcss.2004.01.006