Bayesian compression for dynamically expandable networks

Authors:

Highlights:

• A compact model structure that preserves accuracy via sparsity-inducing priors, which leave fewer neurons at each hidden layer of the network and, equivalently, fewer parameters.

• Dynamically expands network capacity with only the necessary number of neurons, placing sparsity-inducing priors on the added neurons so that capacity grows only when needed (see the expansion sketch after this list).

• A variational Bayesian approximation of the model parameters that captures parameter uncertainty (a minimal sketch of such a layer follows this list).
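
The first and third highlights name sparsity-inducing priors and a variational Bayesian approximation. Below is a minimal sketch, assuming a variational-dropout-style log-uniform prior in the spirit of Bayesian compression; the paper's exact prior, parameterization, and pruning threshold may differ, and names such as BayesianLinear are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    """Linear layer with a factorized Gaussian posterior q(w) = N(mu, sigma^2).

    A log-uniform prior over the weights induces sparsity; the bias term is
    omitted for brevity.
    """

    def __init__(self, in_features, out_features):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        # log(sigma^2), parameterized directly for numerical stability
        self.log_sigma2 = nn.Parameter(
            torch.full((out_features, in_features), -10.0))

    @property
    def log_alpha(self):
        # alpha = sigma^2 / mu^2 acts as a per-weight dropout-rate surrogate
        return (self.log_sigma2 - torch.log(self.mu ** 2 + 1e-8)).clamp(-10, 10)

    def forward(self, x):
        if self.training:
            # Local reparameterization: sample the pre-activations directly
            mean = F.linear(x, self.mu)
            var = F.linear(x ** 2, self.log_sigma2.exp())
            return mean + var.sqrt() * torch.randn_like(mean)
        # At test time, prune weights whose inferred dropout rate is high
        mask = (self.log_alpha < 3.0).float()
        return F.linear(x, self.mu * mask)

    def kl(self):
        # Approximation of KL(q || log-uniform prior), Molchanov et al. (2017)
        k1, k2, k3 = 0.63576, 1.87320, 1.48695
        la = self.log_alpha
        neg_kl = k1 * torch.sigmoid(k2 + k3 * la) - 0.5 * F.softplus(-la) - k1
        return -neg_kl.sum()
```

In training, the objective would combine the data log-likelihood with the summed per-layer kl() terms; hidden neurons whose incoming weights are all masked out can then be removed, yielding the compact structure the first highlight describes.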

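The second highlight's expansion step can be sketched in the same spirit. The fragment below assumes the BayesianLinear layer above; the trigger condition, loss_threshold, and expansion size k are hypothetical illustrations, not taken from the paper.

```python
import torch

def expand_layer(layer, k):
    """Append k output neurons to a BayesianLinear layer.

    New rows start with near-zero means; training under the log-uniform
    prior drives alpha up for neurons the new task does not need, so they
    are pruned. Downstream layers' input dimensions must be widened to match.
    """
    out_features, in_features = layer.mu.shape
    new_mu = torch.cat(
        [layer.mu.data, torch.randn(k, in_features) * 0.01], dim=0)
    new_log_sigma2 = torch.cat(
        [layer.log_sigma2.data, torch.full((k, in_features), -10.0)], dim=0)
    layer.mu = torch.nn.Parameter(new_mu)
    layer.log_sigma2 = torch.nn.Parameter(new_log_sigma2)

# Hypothetical trigger: expand only when current capacity cannot fit the
# new task, then let the sparsity prior trim the surplus during training.
# if new_task_loss > loss_threshold:
#     expand_layer(hidden_layer, k=16)
```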

Keywords: Bayesian compression, DEN, Continual learning, Selective retraining, Dynamically expands network, Semantic drift

Article history: Received 13 January 2020, Revised 24 October 2020, Accepted 16 August 2021, Available online 18 August 2021, Version of Record 26 August 2021.

DOI: https://doi.org/10.1016/j.patcog.2021.108260