Copycat CNN: Are random non-Labeled data enough to steal knowledge from black-box models?

Authors:

Highlights:

• Simple, yet powerful, method to copy a black-box CNN model with random natural images.

• Some constraints are waived and copy attacks are performed with less information.

• Understanding copy attacks with random natural images.

• Thorough evaluation of copycat models created with random natural images.
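The attack the highlights describe can be sketched in miniature: query the black-box target model with random images it was never trained on, record the labels it returns, and train a substitute ("copycat") on those image–label pairs. The sketch below is a toy stand-in under stated assumptions — the oracle is a simple threshold function rather than a real CNN, and the copycat is a nearest-centroid classifier instead of the CNN used in the paper; function names (`query_black_box`, `train_copycat`) are illustrative, not from the paper.

```python
import numpy as np

def query_black_box(images, oracle):
    """Label images by querying the target model; only its outputs are seen."""
    return np.array([oracle(x) for x in images])

def train_copycat(images, labels, n_classes):
    """Toy copycat: a nearest-centroid classifier fit on the stolen labels
    (stand-in for the CNN trained in the paper)."""
    return np.stack([images[labels == c].mean(axis=0) for c in range(n_classes)])

def copycat_predict(centroids, images):
    """Assign each image to the class of its nearest centroid."""
    dists = ((images[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)

# "Black box" the attacker cannot inspect: labels by mean pixel intensity.
oracle = lambda x: int(x.mean() > 0.5)

rng = np.random.default_rng(0)
train_imgs = rng.uniform(size=(200, 16))   # random, non-labeled probe images
stolen = query_black_box(train_imgs, oracle)
centroids = train_copycat(train_imgs, stolen, n_classes=2)

# Agreement between copycat and target on fresh random inputs.
test_imgs = rng.uniform(size=(200, 16))
target = query_black_box(test_imgs, oracle)
copy = copycat_predict(centroids, test_imgs)
agreement = float((copy == target).mean())
```

Even this crude substitute tracks the target closely, which mirrors the paper's point: hard labels on random inputs carry enough signal to clone a decision boundary.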


Keywords: Deep learning, Convolutional neural network, Neural network attack, Stealing network knowledge, Knowledge distillation

Article history: Received 16 July 2019, Revised 21 December 2020, Accepted 2 January 2021, Available online 16 January 2021, Version of Record 20 January 2021.

DOI: https://doi.org/10.1016/j.patcog.2021.107830