Learning undirected graphical models using persistent sequential Monte Carlo

Authors: Hanchen Xiong, Sandor Szedmak, Justus Piater

Abstract

Along with the popular use of algorithms such as persistent contrastive divergence, tempered transitions and parallel tempering, the past decade has witnessed a revival of learning undirected graphical models (UGMs) with sampling-based approximations. In this paper, based upon the analogy between Robbins-Monro’s stochastic approximation procedure and sequential Monte Carlo (SMC), we analyze the strengths and limitations of state-of-the-art learning algorithms from an SMC point of view. Moreover, we carry this rationale further into the sampling performed at each iteration, and propose to learn UGMs using persistent sequential Monte Carlo (PSMC). The whole learning procedure is based on samples from a long, persistent sequence of distributions which are actively constructed. Compared to the above-mentioned algorithms, one critical strength of PSMC-based learning is that it can explore the sampling space more effectively. In particular, it is robust when learning rates are large or model distributions are high-dimensional and thus multi-modal, conditions which often cause other algorithms to deteriorate. We tested PSMC learning, comparing it with related methods, on carefully designed experiments with both synthetic and real-world data. Our empirical results demonstrate that PSMC compares favorably with the state of the art by consistently yielding the highest (or among the highest) likelihoods. We also evaluated PSMC on two practical tasks, multi-label classification and image segmentation, in which PSMC displays promising applicability by outperforming the alternatives.
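To make the PSMC idea concrete, the following is a minimal, hedged sketch (not the paper's exact algorithm) for a tiny fully visible binary pairwise UGM: a persistent particle set approximates the negative-phase statistics of the maximum-likelihood gradient, and after each parameter update the particles are importance-reweighted from the old model distribution to the new one, resampled when the effective sample size collapses, and rejuvenated by a Gibbs sweep. All names and the toy data setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N_PART, N_DATA = 4, 200, 500        # variables, particles, data points

def energy(W, S):
    # E(s) = -0.5 * s^T W s for s in {-1,+1}^D, zero-diagonal symmetric W
    return -0.5 * np.einsum('ni,ij,nj->n', S, W, S)

def gibbs_sweep(W, S):
    # one systematic-scan Gibbs sweep over all rows (particles) of S
    for i in range(D):
        field = S @ W[:, i]            # local field on unit i (W[i,i] = 0)
        p = 1.0 / (1.0 + np.exp(-2.0 * field))
        S[:, i] = np.where(rng.random(S.shape[0]) < p, 1.0, -1.0)
    return S

# ground-truth couplings and synthetic data via a long Gibbs run (assumed toy setup)
W_true = np.zeros((D, D)); W_true[0, 1] = W_true[1, 0] = 0.8
data = np.where(rng.random((N_DATA, D)) < 0.5, 1.0, -1.0)
for _ in range(200):
    data = gibbs_sweep(W_true, data)

# persistent particle set with uniform log-weights
S = np.where(rng.random((N_PART, D)) < 0.5, 1.0, -1.0)
logw = np.zeros(N_PART)
W, lr = np.zeros((D, D)), 0.05

pos = data.T @ data / N_DATA           # positive-phase (data) statistics, fixed
for step in range(300):
    # negative-phase statistics from the weighted particle set
    w = np.exp(logw - logw.max()); w /= w.sum()
    neg = (S * w[:, None]).T @ S
    grad = pos - neg
    np.fill_diagonal(grad, 0.0)
    W_new = W + lr * grad

    # SMC reweighting: particles targeting the old model, reweighted to the new one
    # log p_new/p_old = E_old(s) - E_new(s) (unnormalized; constants cancel at normalization)
    logw += energy(W, S) - energy(W_new, S)
    W = W_new

    # resample when effective sample size drops below half, then rejuvenate by Gibbs
    w = np.exp(logw - logw.max()); w /= w.sum()
    if 1.0 / np.sum(w ** 2) < N_PART / 2:
        idx = rng.choice(N_PART, size=N_PART, p=w)
        S, logw = S[idx].copy(), np.zeros(N_PART)
    S = gibbs_sweep(W, S)

print("learned coupling W[0,1]:", round(W[0, 1], 2))
```

The reweighting step is what distinguishes this from plain persistent contrastive divergence: instead of pretending the persistent chain already targets the updated distribution, the particles carry importance weights that account for the parameter change, in the spirit of the persistent sequence of distributions described in the abstract.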

Keywords: Sequential Monte Carlo, Maximum likelihood learning, Undirected graphical models

DOI: https://doi.org/10.1007/s10994-016-5564-x