Knowledge distillation methods for efficient unsupervised adaptation across multiple domains

Authors:

Highlights:

• Domain adaptation and compression are performed simultaneously (see the sketch after this list).

• Models do not need to be trained on the source dataset.

• Generalizes to multiple target domains.

• Final models are highly compressed and accurate on the target domains.
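
The highlights describe joint model compression and unsupervised adaptation through knowledge distillation. The sketch below is a generic illustration of that idea in PyTorch, not the authors' algorithm: the teacher/student models, the temperature, the loss weights, and the entropy regularizer are assumptions introduced here for illustration only.

```python
# Generic sketch: a compact student is distilled from a large teacher on
# UNLABELED target-domain images, so compression and adaptation happen in
# the same training step. Not the paper's exact method; all hyperparameters
# and the entropy term are illustrative assumptions.
import torch
import torch.nn.functional as F

def distillation_step(teacher, student, target_images, optimizer,
                      temperature=4.0, entropy_weight=0.1):
    """One update of the student on a batch of unlabeled target-domain images."""
    teacher.eval()
    student.train()

    # Teacher (pretrained on the labeled source domain) provides soft targets.
    with torch.no_grad():
        teacher_logits = teacher(target_images)

    student_logits = student(target_images)

    # Soft-target distillation loss (temperature-scaled KL divergence).
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2

    # Assumed confidence (entropy-minimization) regularizer on the student's
    # own target-domain predictions -- not taken from the paper.
    probs = F.softmax(student_logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()

    loss = kd_loss + entropy_weight * entropy
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this sketch, handling multiple target domains could amount to drawing batches from each target domain in turn and applying the same update to one shared student; the paper's actual multi-target strategy may differ.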


Keywords: Deep learning, Convolutional NNs, Knowledge distillation, Unsupervised domain adaptation, CNN acceleration and compression

Article history: Received 4 September 2020, Revised 14 December 2020, Accepted 30 December 2020, Available online 6 January 2021, Version of Record 19 February 2021.

DOI: https://doi.org/10.1016/j.imavis.2021.104096