Abstract:
Federated learning allows multiple clients to train local models that are aggregated on the server side. Clients cannot see how the server produces the shared global model, which gives a malicious attacker an opportunity to exploit this inherent vulnerability of federated learning and launch data leakage attacks. Existing attack techniques are largely client-based and focus on inferring model parameters directly; they do not carry over to server-based attacks, mainly because of differences in how well the attacks generalize, and few robust server-side data leakage attacks targeting this vulnerability have been developed. To address this problem, we propose MOFDRNet, a Multi-Objective Fake Data Regression Network model that integrates the loss function with multiple metric strategies. The key idea is to deploy a malicious attack model on the server that generates fake data and labels and continuously approximates the gradients shared between clients and the server, thereby recovering the clients' private data. Experimental results demonstrate that MOFDRNet has significant advantages in implementing data leakage attacks. Finally, we also discuss a differential privacy defense approach. © 2025 John Wiley & Sons Ltd.
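The abstract's key idea, recovering private client data from the gradients shared with the server, can be illustrated with a minimal sketch. The example below is not the paper's MOFDRNet model; it shows only the underlying principle in a special case where recovery is exact: for a single fully connected layer with a bias term, each row of the weight gradient is the private input scaled by the corresponding bias-gradient entry. All names (`W`, `b`, `x_true`) are illustrative assumptions; deeper models require the iterative fake-data gradient-matching the paper describes.

```python
import numpy as np

# Toy setup: one linear layer z = W x + b with squared-error loss,
# one private sample held by the client. Illustrative only.
rng = np.random.default_rng(0)
d_in, d_out = 6, 3
W = rng.normal(size=(d_out, d_in))   # shared weights, known to the server
b = rng.normal(size=d_out)
x_true = rng.normal(size=d_in)       # client's private input
y_true = rng.normal(size=d_out)      # client's private target

# Client-side step: gradients of 0.5*||W x + b - y||^2 that would be
# uploaded to the server during federated training.
r = W @ x_true + b - y_true          # residual dL/dz
grad_W = np.outer(r, x_true)         # dL/dW = r x^T
grad_b = r                           # dL/db = r

# Server-side attack: each row of dL/dW equals x scaled by the matching
# entry of dL/db, so the private input can be read off directly.
i = int(np.argmax(np.abs(grad_b)))   # pick a row with nonzero dL/db
x_recovered = grad_W[i] / grad_b[i]

print(np.allclose(x_recovered, x_true))  # True
```

For multi-layer networks this closed form no longer applies, which is why gradient-leakage attacks such as the one sketched in the abstract instead optimize fake data and labels until their gradients match the shared ones.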
Source :
Concurrency and Computation: Practice and Experience
ISSN: 1532-0626
Year: 2025
Issue: 9-11
Volume: 37
2.000
JCR@2022
ESI Highly Cited Papers on the List: 0