Fooling deep neural detection networks with adaptive object-oriented adversarial perturbation

Authors:

Highlights:

• A flexible, adaptive object-oriented adversarial strategy generates adversarial perturbations for fooling deep neural detection networks.

• The adaptive object-oriented adversarial method (AO2AM) reduces pixel modifications in generated adversarial examples.

• AO2AM outperforms adversarial attack methods that perturb the whole scale of the input in fooling deep neural detection networks.

• Crafted adversarial samples remain highly similar to the original inputs.

• A metric to evaluate the impact of fooling deep neural detection networks is proposed.
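The core idea in the highlights, restricting the perturbation to object regions rather than the whole input, can be sketched with a simple bounding-box mask. This is a minimal illustration, not the authors' AO2AM: the function name, the box format, and the FGSM-style additive step are assumptions for demonstration only.

```python
import numpy as np

def object_masked_perturbation(image, boxes, grad_sign, eps=0.1):
    """Apply a gradient-sign perturbation only inside detected object boxes.

    Restricting the perturbation to object regions leaves background pixels
    untouched, so far fewer pixels are modified than in whole-image attacks.
    (Illustrative sketch only; not the paper's actual AO2AM algorithm.)
    """
    mask = np.zeros(image.shape[:2], dtype=bool)
    for x1, y1, x2, y2 in boxes:          # boxes given as (x1, y1, x2, y2) pixel coords
        mask[y1:y2, x1:x2] = True
    # Perturb only the masked pixels; clip back to the valid [0, 1] range.
    adv = image + eps * grad_sign * mask[..., None]
    return np.clip(adv, 0.0, 1.0)

# Tiny example: an 8x8 RGB image with one 4x4 object box.
img = np.full((8, 8, 3), 0.5)
grad_sign = np.ones_like(img)             # stand-in for sign of the detector's loss gradient
adv = object_masked_perturbation(img, [(2, 2, 6, 6)], grad_sign, eps=0.1)
changed = int(np.any(adv != img, axis=-1).sum())
print(changed)                            # 16 of 64 pixels changed
```

The example makes the "reduces pixel modifications" claim concrete: only the 16 pixels inside the object box are altered, while a whole-input attack would modify all 64.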

Keywords: Object detection, Adversarial attack, Adaptive object-oriented perturbation

Article history: Received 13 January 2020; Revised 31 August 2020; Accepted 16 February 2021; Available online 20 February 2021; Version of Record 26 February 2021.

DOI: https://doi.org/10.1016/j.patcog.2021.107903