Deep quantization generative networks

Authors:

Highlights:

• This is a pioneering work exploring quantization to accelerate and compress deep convolutional generative models.

• Analyses and experiments suggest the importance of maintaining sufficient information for activation quantization.

• The proposed deep quantization generative network (DQGN) quantizes both network weights and activations to low bit-widths.

• Experiments on VAEs, GANs, style transfer, and super-resolution demonstrate the effectiveness of the proposed DQGN.
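To make the idea behind the highlights concrete: quantization maps full-precision weights and activations onto a small set of discrete levels. The sketch below shows generic uniform (min-max) quantization only; it is an illustrative assumption, not the DQGN method described in the paper, and the function name `uniform_quantize` is hypothetical.

```python
import numpy as np

def uniform_quantize(x, num_bits):
    """Generic uniform min-max quantization (illustration only,
    not the paper's DQGN scheme)."""
    qmax = 2 ** num_bits - 1          # number of levels minus one
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / qmax if hi > lo else 1.0
    q = np.round((x - lo) / scale)    # integer codes in [0, qmax]
    return q * scale + lo             # dequantized approximation

# Example: quantize a small weight tensor to 4 bits.
w = np.array([-0.51, -0.2, 0.0, 0.3, 0.49])
w4 = uniform_quantize(w, 4)
```

With 4 bits there are only 16 representable levels, so each value is off by at most half the quantization step; fewer bits shrink the model and speed up arithmetic at the cost of larger error, which is why the highlights stress preserving enough information when quantizing activations.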

Keywords: Compression, Acceleration, Generative models, Network quantization

Article history: Received 10 July 2019, Revised 24 February 2020, Accepted 12 March 2020, Available online 14 March 2020, Version of Record 5 June 2020.

DOI: https://doi.org/10.1016/j.patcog.2020.107338