Query efficient black-box adversarial attack on deep neural networks

Authors:

Highlights:

• We explore flexible versions of NP-Attack combined with surrogate models. These variants achieve better query efficiency, demonstrating that NP-Attack outperforms existing methods both with and without surrogate models.

• We add tiling tricks to NP-Attack to further improve query efficiency, and we design ablation studies on the tiling parameters (a generic sketch of the tiling idea follows this list).

• We also evaluate NP-Attack against adversarial defense models to further assess its capability. Extensive experiments on standard benchmarks demonstrate that NP-Attack still outperforms existing evolution-strategy methods on these black-box attack tasks.
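
The tiling trick referenced above is not detailed in this listing; in query-limited black-box attacks it commonly means searching a low-resolution perturbation and upsampling it to the image size, so each query only has to evaluate a candidate with far fewer free parameters. The sketch below illustrates that generic idea only; the tile size, the L_inf budget, and the nearest-neighbour upsampling are assumptions for illustration, not the paper's implementation.

    # Generic tiling sketch (not the authors' code): optimise a (t, t, c) perturbation
    # and upsample it to the full image, shrinking the search space from H*W*C variables
    # to t*t*c variables. Tile size and epsilon below are hypothetical choices.
    import numpy as np

    def upsample_tiles(delta_low, image_hw):
        """Nearest-neighbour upsample of a (t, t, c) perturbation to (H, W, c)."""
        h, w = image_hw
        rows = np.repeat(delta_low, -(-h // delta_low.shape[0]), axis=0)[:h]
        return np.repeat(rows, -(-w // delta_low.shape[1]), axis=1)[:, :w]

    rng = np.random.default_rng(0)
    image = rng.random((32, 32, 3))                # e.g. a CIFAR-10-sized input in [0, 1]
    tile_size, eps = 8, 8.0 / 255.0                # assumed tiles per side and L_inf budget
    delta_low = rng.uniform(-eps, eps, (tile_size, tile_size, 3))  # only 8*8*3 variables
    adv = np.clip(image + upsample_tiles(delta_low, image.shape[:2]), 0.0, 1.0)
    print(adv.shape)                               # (32, 32, 3)

Each query to the black-box model then scores one candidate delta_low, so the search dimensionality drops from H*W*C to tile_size*tile_size*C; the tiling parameter controls the trade-off between this reduction and the expressiveness of the perturbation.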

Keywords: Black-box adversarial attack, Adversarial distribution, Query efficiency, Neural process

Article history: Received 13 December 2021, Revised 23 August 2022, Accepted 6 September 2022, Available online 11 September 2022, Version of Record 15 September 2022.

Paper URL: https://doi.org/10.1016/j.patcog.2022.109037