Improving adversarial robustness of deep neural networks by using semantic information

Abstract

Deep neural networks (DNNs) are vulnerable to adversarial attacks: deliberately perturbed inputs that mislead state-of-the-art classifiers into confident misclassifications. Adversarial training, the main heuristic for improving adversarial robustness and the first line of defense against such attacks, requires many per-sample computations to enlarge the training set and is usually not strong enough for the entire network. This paper offers a new perspective on adversarial robustness, shifting the focus from the network as a whole to the critical region near the decision boundary corresponding to a given class. From this perspective, we propose a method that generates a single, image-agnostic adversarial perturbation that carries semantic information indicating the directions toward the fragile parts of the decision boundary and causes inputs to be misclassified as a specified target. We call adversarial training based on such perturbations "region adversarial training" (RAT); it resembles classical adversarial training but differs in that it reinforces the semantic information missing in the relevant regions. Experimental results on the MNIST and CIFAR-10 datasets show that this approach greatly improves adversarial robustness even when only a very small subset of the training data is used; moreover, it defends against the fast gradient sign method (FGSM), universal perturbations, projected gradient descent (PGD), and Carlini and Wagner (C&W) adversarial attacks, whose patterns differ completely from those seen by the model during retraining.
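To make the idea of a single, image-agnostic *targeted* perturbation concrete, the following is a minimal sketch, not the authors' algorithm: a toy linear softmax classifier in NumPy, where the gradient of the targeted cross-entropy loss with respect to the input is averaged over a batch, a signed step is taken (an FGSM-style update), and the accumulated perturbation is projected back into an L-infinity epsilon-ball. All function names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def targeted_universal_perturbation(X, W, b, target, eps=0.5, step=0.05, iters=50):
    """Toy sketch: find one perturbation `delta` (shared by every input in X)
    that pushes a linear softmax classifier toward class `target`.

    X: (n, d) batch of inputs; W: (d, c) weights; b: (c,) biases.
    The perturbation is kept inside the L-infinity ball of radius `eps`.
    This is an illustrative simplification, not the RAT procedure itself.
    """
    delta = np.zeros(X.shape[1])
    n = X.shape[0]
    for _ in range(iters):
        P = softmax((X + delta) @ W + b)      # (n, c) class probabilities
        G = P.copy()
        G[np.arange(n), target] -= 1.0        # dL/dlogits for targeted CE loss
        grad_x = (G @ W.T).mean(axis=0)       # input gradient, averaged over batch
        delta -= step * np.sign(grad_x)       # signed step that lowers targeted loss
        delta = np.clip(delta, -eps, eps)     # project back into the eps-ball
    return delta
```

Because the model is linear, the averaged gradient direction is nearly constant, so the signed steps quickly saturate `delta` at the corners of the epsilon-ball; for a real DNN the same loop would recompute the gradient through the network at each iteration.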

Keywords: Adversarial robustness, Semantic information, Region adversarial training, Targeted universal perturbations

Article history: Received 28 July 2020; Revised 6 May 2021; Accepted 11 May 2021; Available online 13 May 2021; Version of Record 24 May 2021.

DOI: https://doi.org/10.1016/j.knosys.2021.107141