Simpler PAC-Bayesian bounds for hostile data

Authors: Pierre Alquier, Benjamin Guedj

Abstract

PAC-Bayesian learning bounds are of the utmost interest to the learning community. Their role is to connect the generalization ability of an aggregation distribution \(\rho \) to its empirical risk and to its Kullback-Leibler divergence with respect to some prior distribution \(\pi \). Unfortunately, most of the available bounds rely on heavy assumptions such as boundedness and independence of the observations. This paper aims at relaxing these constraints and provides PAC-Bayesian learning bounds that hold for dependent, heavy-tailed observations (hereafter referred to as hostile data). In these bounds the Kullback-Leibler divergence is replaced with a general version of Csiszár's f-divergence. We prove a general PAC-Bayesian bound, and show how to use it in various hostile settings.
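The divergence family mentioned in the abstract can be illustrated concretely. The following is a minimal sketch (not the paper's bound itself) of Csiszár's f-divergence \(D_f(\rho \Vert \pi ) = \sum_i \pi_i \, f(\rho_i / \pi_i)\) for discrete distributions, instantiated with the convex functions that recover the Kullback-Leibler and chi-square divergences; the function and variable names are illustrative choices, not from the paper.

```python
import math

def f_divergence(rho, pi, f):
    """Csiszar f-divergence D_f(rho || pi) = sum_i pi[i] * f(rho[i] / pi[i]),
    for discrete distributions with pi[i] > 0 and a convex f with f(1) = 0."""
    return sum(p * f(r / p) for r, p in zip(rho, pi))

# f(x) = x log x recovers the Kullback-Leibler divergence
kl_f = lambda x: x * math.log(x) if x > 0 else 0.0

# f(x) = x^2 - 1 recovers the chi-square divergence
chi2_f = lambda x: x * x - 1.0

rho = [0.5, 0.3, 0.2]          # aggregation (posterior) distribution
pi = [1 / 3, 1 / 3, 1 / 3]     # prior distribution (uniform)

print(f_divergence(rho, pi, kl_f))    # KL(rho || pi), strictly positive here
print(f_divergence(rho, pi, chi2_f))  # chi-square divergence, also positive
```

Both instances vanish when \(\rho = \pi \) (since \(f(1)=0\)) and are non-negative by Jensen's inequality, which is the structural property PAC-Bayesian bounds exploit.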

Keywords: PAC-Bayesian theory, Dependent and unbounded data, Oracle inequalities, f-divergence

Paper link: https://doi.org/10.1007/s10994-017-5690-0