Fast hyperparameter tuning using Bayesian optimization with directional derivatives

Authors:

Highlights:

Abstract

In this paper we develop a Bayesian-optimization-based hyperparameter tuning framework inspired by statistical learning theory for classifiers. We utilize two key facts from PAC learning theory: the generalization bound is higher for a small subset of the data than for the whole, and the highest accuracy on a small subset can be achieved with a simpler model. We first tune the hyperparameters on a small subset of the training data using Bayesian optimization. When tuning the hyperparameters on the whole training data, we leverage these insights from learning theory to seek more complex models. We realize this by strategically placing directional derivative signs in the hyperparameter search space, steering the search toward a model more complex than the one obtained on the small subset. We demonstrate the performance of our method on the task of tuning the hyperparameters of several machine learning algorithms.
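The two-stage idea in the abstract can be sketched in code. The following is a minimal illustration only, not the paper's implementation: it assumes Ridge regression with a single hyperparameter (log regularization strength, where smaller alpha means a more complex model), uses a simple Gaussian-process Bayesian optimization loop with expected improvement, and stands in for the directional-derivative-sign mechanism by simply restricting the full-data search to models more complex than the small-subset optimum.

```python
import numpy as np
from scipy.stats import norm
from sklearn.datasets import make_regression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_regression(n_samples=400, n_features=20, noise=5.0, random_state=0)

def score(log_alpha, Xd, yd):
    """Cross-validated score of Ridge with alpha = 10**log_alpha.
    Smaller alpha = weaker regularization = a more complex model."""
    model = Ridge(alpha=10.0 ** log_alpha)
    return cross_val_score(model, Xd, yd, cv=3).mean()

def bo_maximize(objective, bounds, n_init=5, n_iter=15):
    """Basic GP-based Bayesian optimization with expected improvement."""
    lo, hi = bounds
    xs = list(rng.uniform(lo, hi, n_init))
    ys = [objective(x) for x in xs]
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(np.array(xs).reshape(-1, 1), ys)
        cand = rng.uniform(lo, hi, 256).reshape(-1, 1)
        mu, sd = gp.predict(cand, return_std=True)
        best = max(ys)
        z = (mu - best) / np.maximum(sd, 1e-9)
        ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)
        x_next = float(cand[np.argmax(ei)])
        xs.append(x_next)
        ys.append(objective(x_next))
    return xs[int(np.argmax(ys))]

# Stage 1: tune on a small subset of the training data.
sub = rng.choice(len(X), 80, replace=False)
best_small = bo_maximize(lambda a: score(a, X[sub], y[sub]), bounds=(-4.0, 4.0))

# Stage 2: tune on the full data, biased toward more complex models
# (here: log_alpha at or below the stage-1 optimum).
best_full = bo_maximize(lambda a: score(a, X, y), bounds=(-4.0, best_small))
```

By construction, the full-data optimum lies in the more-complex region of the search space relative to the small-subset optimum; the paper encodes this preference through directional derivative signs in the Gaussian process rather than by hard-clipping the bounds as done here.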

Keywords: Bayesian optimization, Gaussian process, Hyperparameter tuning

Article history: Received 22 March 2020, Revised 9 June 2020, Accepted 9 July 2020, Available online 10 July 2020, Version of Record 21 July 2020.

DOI: https://doi.org/10.1016/j.knosys.2020.106247