Unsupervised bin-wise pre-training: A fusion of information theory and hypergraph

Authors:

Highlights:

• A novel pre-training model is proposed to improve generalization and the rate of convergence.

• A new parameter update rule is introduced that performs both optimization and regularization.

• The k-Helly property of hypergraphs is employed to constrain parameter updates during pre-training.

• Three benchmark datasets are used to demonstrate the superiority of the proposed model.
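The title's "bin-wise" pre-training and the Mutual information keyword point to histogram-based information estimation. As a hedged illustration only (not the paper's actual method; the function name, bin count, and test signals are assumptions), the sketch below estimates the mutual information between two continuous signals by bin-wise discretization:

```python
import numpy as np

def binwise_mutual_information(x, y, bins=10):
    """Estimate I(X; Y) in nats from samples via bin-wise (histogram) discretization.

    Illustrative sketch only; a plug-in estimator, biased for small samples.
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()             # joint probability table p(x, y)
    px = pxy.sum(axis=1, keepdims=True)   # marginal p(x), shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)   # marginal p(y), shape (1, bins)
    nz = pxy > 0                          # skip empty cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = x + 0.1 * rng.normal(size=5000)       # strongly dependent on x
z = rng.normal(size=5000)                 # independent of x
print(binwise_mutual_information(x, y))   # large (x nearly determines y)
print(binwise_mutual_information(x, z))   # near zero
```

Dependent pairs yield a large estimate while independent pairs fall near zero (a small positive bias remains, since empty-bin noise never decreases the plug-in estimate).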


Keywords: Deep neural network, Mutual information, Information theory, Partial information decomposition, Hypergraph

Article history: Received 2 August 2019, Revised 9 February 2020, Accepted 10 February 2020, Available online 13 February 2020, Version of Record 4 April 2020.

DOI: https://doi.org/10.1016/j.knosys.2020.105650