Sparse Auto-encoder with Smoothed \(l_1\) Regularization

Authors: Li Zhang, Yaping Lu, Bangjun Wang, Fanzhang Li, Zhao Zhang

Abstract

Improving the data-representation performance of an auto-encoder can help to obtain a satisfactory deep network. One strategy for enhancing this performance is to incorporate sparsity into the auto-encoder. Sparsity has commonly been achieved by adding a Kullback–Leibler (KL) divergence term to the risk functional. In compressive sensing and machine learning, \(l_1\) regularization is a well-known and widely used technique for inducing sparsity. This paper therefore introduces a smoothed \(l_1\) regularization, in place of the commonly used KL divergence, to enforce sparsity in auto-encoders. Experimental results show that the smoothed \(l_1\) regularization works better than the KL divergence.
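The paper itself provides no code; the following Python sketch (an illustration, not the authors' implementation) contrasts the two sparsity penalties on hypothetical sigmoid hidden activations. It assumes the common smoothing \(\sqrt{z^2 + \varepsilon}\) of \(|z|\) and a target sparsity \(\rho = 0.05\); all function names and parameter values are illustrative.

```python
import numpy as np

def kl_sparsity(rho_hat, rho=0.05):
    # KL divergence between target sparsity rho and the mean
    # activation rho_hat of each hidden unit (sigmoid units in (0, 1))
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

def smoothed_l1(a, eps=1e-3):
    # smooth approximation of sum(|a|): sqrt(a^2 + eps) is
    # differentiable at 0, unlike the exact l1 norm
    return np.sum(np.sqrt(a ** 2 + eps))

# hypothetical mini-batch of hidden activations: batch x hidden units
rng = np.random.default_rng(0)
activations = rng.uniform(0.01, 0.99, size=(32, 64))

rho_hat = activations.mean(axis=0)  # mean activation per hidden unit
print("KL sparsity penalty:", kl_sparsity(rho_hat))
print("Smoothed l1 penalty (per sample):",
      smoothed_l1(activations) / activations.shape[0])
```

Either penalty would be added, weighted by a sparsity coefficient, to the auto-encoder's reconstruction loss; the smoothed form keeps the objective differentiable everywhere, which the exact \(l_1\) norm is not.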

Keywords: Auto-encoder, Sparsity, KL divergence, Smoothed \(l_1\) regularization, Data representation


DOI: https://doi.org/10.1007/s11063-017-9668-5