Abstract:
Deep learning has achieved great success in many fields, such as image classification and object detection. Adding a small perturbation, imperceptible to the human eye, to an original image can make a neural network output incorrect results with high confidence; such a perturbed image is called an adversarial example. The existence of adversarial examples poses a serious security threat to deep learning. To defend effectively against adversarial example attacks, we analyze existing attack and defense methods and propose a defense method based on image reconstruction. Our dataset is derived from the ImageNet-1k dataset, with some filtering and expansion applied. Four attacks, FGSM, BIM, DeepFool, and C&W, are selected to test the defense method. The reconstruction network is based on EDSR, with a multi-scale feature fusion module and a subspace attention module added. By capturing the global correlation information of the image, the network removes the perturbation while better preserving image texture details, improving defense performance. The experimental results show that the proposed method achieves a good defense effect. © 2023 SPIE.
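For context only, and not taken from the paper: the minimal PyTorch sketch below shows the single-step FGSM attack the abstract evaluates against, and the general shape of a reconstruction-based defense pipeline. The `reconstructor` argument is a hypothetical stand-in for the authors' modified EDSR network (with multi-scale feature fusion and subspace attention); any image-to-image denoising network fits that slot.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=8 / 255):
    # Single-step FGSM (Goodfellow et al., 2015): move each pixel by
    # epsilon in the sign direction of the loss gradient.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()  # keep pixels in valid range

def defended_predict(classifier, reconstructor, images):
    # Reconstruction-based defense: reconstruct the (possibly adversarial)
    # input before classifying it. `reconstructor` is a hypothetical
    # placeholder for the paper's EDSR-based network, not the authors' code.
    with torch.no_grad():
        return classifier(reconstructor(images))
```

The other three attacks tested in the paper differ in strength: BIM iterates the FGSM step with a smaller step size, while DeepFool and C&W search for near-minimal perturbations, making them harder for a reconstruction defense to remove.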
ISSN: 0277-786X
Year: 2023
Volume: 12782
Language: English