Reinforced SVM method and memorization mechanisms

Authors: Vladimir Vapnik, Rauf Izmailov


Abstract:

The paper is devoted to two problems: (1) reinforcement of SVM algorithms, and (2) justification of memorization mechanisms for generalization.

(1) The standard SVM algorithm was designed for the case where the risk over the set of nonnegative slack variables is defined by the l1 norm. In this paper, along with that classical l1 norm, we consider risks defined by the l2 norm and the l∞ norm. Using these norms, we formulate several modifications of the existing SVM algorithm and show that the resulting modified SVM algorithms can improve (sometimes significantly) the classification performance.

(2) The generalization ability of existing learning algorithms is usually explained by arguments involving uniform convergence of empirical losses to the corresponding expected losses over a given set of functions. However, along with bounds for uniform convergence of empirical losses to expected losses, VC theory also provides bounds for relative uniform convergence, which lead to more accurate estimates of the expected loss. Advanced methods of estimating the expected risk of error should leverage these bounds, which also support mechanisms of training data memorization; as the paper demonstrates, such mechanisms can improve classification performance.
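For concreteness, the three slack-variable risks named in (1) correspond to soft-margin formulations of the following standard form (a reconstruction in the usual SVM notation, not quoted from the paper; in particular, the squared l2 penalty below is the conventional choice, and the paper's exact formulation may differ):

$$
\min_{w,\,b,\,\xi}\ \frac{1}{2}\|w\|^2 + C\,\Phi(\xi)
\quad\text{s.t.}\quad y_i\bigl(\langle w, x_i\rangle + b\bigr) \ge 1 - \xi_i,\quad \xi_i \ge 0,\quad i = 1,\dots,\ell,
$$

where the risk over the slack vector $\xi = (\xi_1,\dots,\xi_\ell)$ is $\Phi(\xi) = \|\xi\|_1$ (the classical case), $\Phi(\xi) = \|\xi\|_2^2$, or $\Phi(\xi) = \|\xi\|_\infty = \max_i \xi_i$.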
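A minimal sketch of the three variants for a linear kernel, written as a convex program with the cvxpy modeling library (the function name and interface are illustrative, not the paper's implementation):

```python
import numpy as np
import cvxpy as cp

def soft_margin_svm(X, y, C=1.0, slack_risk="l1"):
    """Linear soft-margin SVM with an l1, l2, or l-infinity slack penalty.

    X : (n, d) array of inputs; y : (n,) array of +1/-1 labels.
    Illustrative sketch only, not the paper's reference implementation.
    """
    n, d = X.shape
    w = cp.Variable(d)
    b = cp.Variable()
    xi = cp.Variable(n, nonneg=True)  # slack variables, xi_i >= 0

    # Risk term over the slack vector, per the norm being compared.
    if slack_risk == "l1":
        risk = cp.sum(xi)            # classical hinge-loss SVM
    elif slack_risk == "l2":
        risk = cp.sum_squares(xi)    # squared l2 penalty (conventional L2-SVM)
    elif slack_risk == "linf":
        risk = cp.max(xi)            # penalize only the worst margin violation
    else:
        raise ValueError(slack_risk)

    objective = cp.Minimize(0.5 * cp.sum_squares(w) + C * risk)
    margin = cp.multiply(y, X @ w + b)          # y_i * (<w, x_i> + b)
    problem = cp.Problem(objective, [margin >= 1 - xi])
    problem.solve()
    return w.value, b.value

# Usage on synthetic linearly separable-ish data:
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
w, b = soft_margin_svm(X, y, C=1.0, slack_risk="linf")
```

The l1 case yields the usual QP; the l2 case penalizes large violations more heavily; the l∞ case concentrates the penalty on the single worst violation.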
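The relative uniform convergence bound referred to in (2) takes, in its classical VC-theory form (as in Vapnik's Statistical Learning Theory; the paper's exact constants may differ), the following shape: with probability at least $1-\eta$, simultaneously for all functions $f$ in a set of VC dimension $h$,

$$
R(f) \;\le\; R_{\mathrm{emp}}(f) + \frac{\mathcal{E}}{2}\left(1 + \sqrt{1 + \frac{4\,R_{\mathrm{emp}}(f)}{\mathcal{E}}}\right),
\qquad
\mathcal{E} = \frac{4}{\ell}\left(h\left(\ln\frac{2\ell}{h} + 1\right) - \ln\frac{\eta}{4}\right),
$$

where $R$ is the expected loss, $R_{\mathrm{emp}}$ the empirical loss, and $\ell$ the sample size. Because the correction term shrinks with $R_{\mathrm{emp}}$, this bound is much tighter than the standard uniform one in the small-empirical-loss regime, which is precisely where memorizing training data becomes informative.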

Keywords: Support vector machine, classification, learning theory, VC dimension, kernel function, Reproducing Kernel Hilbert space

Article history: Received 17 September 2020, Revised 18 March 2021, Accepted 5 May 2021, Available online 19 May 2021, Version of Record 8 June 2021.

DOI: https://doi.org/10.1016/j.patcog.2021.108018