Abstract:
Federated learning is an emerging privacy-preserving framework for machine learning. A central server aggregates the locally optimized parameters of multiple participants, distributes the resulting model back to the clients, and iterates until the global model converges; a model trained this way approaches the performance of centralized training while the data never leaves the participants' local devices. However, many studies have shown that this centralized federated system is vulnerable to confidentiality attacks by "honest-but-curious" adversaries, who exploit the gradient information transmitted during federated model training to mount reconstruction or inference attacks, recovering participants' private data or deducing membership information, which poses a severe challenge to privacy protection in federated learning. In this paper, a hybrid defense strategy combining a confusion autoencoder with localized differential privacy is proposed. On the one hand, the labels of participants' local data are obfuscated by an autoencoder network, severing the link between the gradient information and the original data. On the other hand, a localized differential privacy mechanism perturbs the transmitted gradient parameters, and a model-performance loss-constraint mechanism is designed to limit the impact of the added noise on model performance. Experiments show that the proposed hybrid defense strategy effectively resists reconstruction and inference attacks during federated model training and achieves a better balance among computational overhead, model performance, and privacy security. © 2023 SPIE.
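The abstract does not give the paper's exact perturbation mechanism, but the localized differential privacy step it describes is commonly realized by clipping each client's gradient and adding calibrated noise before upload, so the server only ever sees a noisy version. Below is a minimal sketch of that idea using a Gaussian mechanism; `clip_norm` and `noise_scale` are illustrative hyperparameters, not values from the paper.

```python
import numpy as np

def perturb_gradient(grad, clip_norm=1.0, noise_scale=0.5, rng=None):
    """Clip a gradient to at most clip_norm, then add Gaussian noise.

    This is an illustrative local-DP-style perturbation, not the paper's
    exact mechanism: clip_norm bounds each client's sensitivity, and
    noise_scale controls the privacy/utility trade-off.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_scale * clip_norm, size=grad.shape)
    return clipped + noise

# Each client perturbs locally before transmitting to the server.
grad = np.array([3.0, 4.0])  # norm 5.0, clipped down to norm 1.0
noisy = perturb_gradient(grad, clip_norm=1.0, noise_scale=0.5,
                         rng=np.random.default_rng(0))
print(noisy.shape)
```

The clipping step is what makes the noise calibration meaningful: without a bound on the gradient norm, no finite noise scale yields a differential privacy guarantee.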
ISSN: 0277-786X
Year: 2023
Volume: 12717
Language: English