Abstract:
The vulnerability of deep learning to adversarial attacks poses serious security risks to its deployment. However, currently proposed adversarial attack detection methods are ineffective on large images. Studies have shown that steganalysis-based detection methods have clear advantages in defending against large-scale adversarial examples, but their detection results depend heavily on the quality of manually extracted features. This paper proposes an end-to-end convolutional neural network (CNN) model based on steganalysis for adversarial example detection. We find that adversarial attacks tend to embed perturbations in high-texture regions. To focus feature learning on these vulnerable areas, we use the original image to learn an attention distribution, which improves the model's accuracy. In addition, an adversarial loss function is proposed to fine-tune the model so that normal images and adversarial examples are more separated in their spatial distribution, reducing the probability of misclassifying normal images. Experiments show that the model defends effectively against current state-of-the-art attack methods. © 2022 IEEE.
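The separation idea in the abstract can be sketched as a binary detection loss with an added margin term that pushes normal and adversarial detector scores apart. This is a minimal illustrative sketch, not the paper's actual loss: the function names, the hinge-margin formulation, and the weighting factor `alpha` are assumptions.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def detection_loss(normal_logits, adv_logits, margin=1.0, alpha=0.5):
    """Hypothetical detector loss: binary cross-entropy plus a pairwise
    margin term that widens the gap between normal and adversarial scores,
    reducing false alarms on normal images."""
    # BCE with labels: normal -> 0, adversarial -> 1
    bce = 0.0
    for z in normal_logits:
        bce += -math.log(1.0 - sigmoid(z) + 1e-12)
    for z in adv_logits:
        bce += -math.log(sigmoid(z) + 1e-12)
    bce /= (len(normal_logits) + len(adv_logits))
    # Hinge penalty on every (normal, adversarial) pair whose logits are
    # separated by less than `margin`.
    sep = 0.0
    for zn in normal_logits:
        for za in adv_logits:
            sep += max(0.0, margin - (za - zn))
    sep /= (len(normal_logits) * len(adv_logits))
    return bce + alpha * sep
```

Under this sketch, a batch with well-separated logits (e.g. normal around -3, adversarial around +3) incurs a much smaller loss than one where the two score distributions overlap near zero.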
Year: 2022
Page: 55-60
Language: English
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count: 2
ESI Highly Cited Papers on the List: 0