Feature space approximation for kernel-based supervised learning

Authors:

Highlights:

Abstract

We propose a method for the approximation of high- or even infinite-dimensional feature vectors, which play an important role in supervised learning. The goal is to reduce the size of the training data, resulting in lower storage consumption and computational complexity. Furthermore, the method can be regarded as a regularization technique that improves the generalizability of learned target functions. We demonstrate significant improvements compared to data-driven predictions computed on the full training data set. The method is applied to classification and regression problems from different application areas such as image recognition, system identification, and oceanographic time series analysis.
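The abstract describes the general idea of replacing exact, high-dimensional kernel feature vectors with a compact approximation built from a small subset of the training data. The paper's specific algorithm is not given here; the following is only a minimal, hypothetical sketch of that general idea using a standard Nyström-style approximation combined with ridge regression, implemented with NumPy. All function names and parameter values below are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: Nystroem-style feature-space approximation for kernel
# ridge regression. A generic illustration of approximating feature vectors
# with a small landmark subset -- NOT the specific algorithm of the paper.
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Y.
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def nystroem_features(X, landmarks, gamma=1.0):
    # Approximate feature map K(X, L) K(L, L)^{-1/2}: m-dimensional feature
    # vectors whose inner products approximate the full kernel.
    K_mm = rbf_kernel(landmarks, landmarks, gamma)
    K_nm = rbf_kernel(X, landmarks, gamma)
    w, U = np.linalg.eigh(K_mm)
    w = np.maximum(w, 1e-12)            # guard against tiny negative eigenvalues
    return K_nm @ (U / np.sqrt(w))      # shape (n_samples, m)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=1000)

landmarks = X[rng.choice(len(X), size=50, replace=False)]   # m << n
Z = nystroem_features(X, landmarks, gamma=0.5)

# Ridge regression in the 50-dimensional approximate feature space instead of
# solving an n x n kernel system -- lower storage and computational cost.
alpha = 1e-3
theta = np.linalg.solve(Z.T @ Z + alpha * np.eye(Z.shape[1]), Z.T @ y)
y_hat = Z @ theta
print("train MSE:", float(np.mean((y_hat - y) ** 2)))
```

The sketch illustrates only the computational benefit claimed in the abstract: predictions are computed from low-dimensional approximate features rather than from the full kernel over the entire training set.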

Keywords: 68Q27, 68Q32, 68T09, Supervised learning, Kernel-based methods, Feature spaces, Dimensionality reduction, System identification

Article history: Received 8 December 2020, Revised 9 February 2021, Accepted 4 March 2021, Available online 20 March 2021, Version of Record 23 March 2021.

DOI: https://doi.org/10.1016/j.knosys.2021.106935