(AD)2: Adversarial domain adaptation to defense with adversarial perturbation removal

Authors:

Highlights:

• We propose a modularized defense framework that detects adversarial examples and removes the adversarial perturbations from them.

• We analyze the impact of the reconstruction error metric on the accuracy of the generative detection method and demonstrate its validity.

• Experimental results demonstrate that our method can mitigate the trade-off between accuracy and robustness for deep neural networks.


Keywords: Deep learning, Adversarial example, Domain adaptation

Article history: Received 1 January 2020, Revised 31 August 2021, Accepted 4 September 2021, Available online 5 September 2021, Version of Record 10 September 2021.

Article link: https://doi.org/10.1016/j.patcog.2021.108303