Abstract:
Federated Learning (FL), a secure and emerging distributed learning paradigm, has garnered significant interest in the Internet of Things (IoT) domain. However, it remains vulnerable to adversaries who may compromise privacy and integrity. Previous studies on privacy-preserving FL (PPFL) have demonstrated limitations in client model personalization and resistance to poisoning attacks, including Byzantine and backdoor attacks. In response, we propose a novel PPFL framework, FedRectify, that employs a personalized dual-layer approach through the deployment of Trusted Execution Environments and an interactive training strategy. This strategy facilitates the learning of personalized client features via private and shared layers. Furthermore, to improve the model's robustness to poisoning attacks, we introduce a novel aggregation method that employs clustering to filter out outlier model parameters and robust regression to assess the confidence of cluster members, thereby rectifying poisoned parameters. We theoretically prove the convergence of FedRectify and empirically validate its performance through extensive experiments. The results demonstrate that FedRectify converges 1.47-2.63 times faster than state-of-the-art methods when countering Byzantine attacks. Moreover, it rapidly reduces the attack success rate to between 10% and 40% in subsequent rounds when confronting bursty backdoor attacks.
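The abstract only outlines the aggregation rule (cluster to filter outliers, then robust regression to weight cluster members). The snippet below is a minimal, illustrative Python sketch of that general cluster-then-robust-regression style of aggregation; the function name robust_aggregate and the use of scikit-learn's DBSCAN and HuberRegressor are assumptions for illustration, not FedRectify's actual implementation.

```python
# Illustrative sketch only: clusters flattened client updates, discards
# outliers, then uses a robust regression fit against the cluster median
# to down-weight suspicious members before averaging. Library choices
# (scikit-learn's DBSCAN / HuberRegressor) are assumptions, not the paper's code.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.linear_model import HuberRegressor

def robust_aggregate(client_updates, eps=0.5, min_samples=3):
    """client_updates: list of 1-D numpy arrays (flattened model deltas)."""
    X = np.stack(client_updates)                      # shape: (n_clients, n_params)

    # 1) Cluster the updates; DBSCAN marks outliers with label -1, which are dropped.
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    kept = X[labels != -1]
    if len(kept) == 0:                                # fall back to a plain mean if all are flagged
        return X.mean(axis=0)

    # 2) Robust regression of each member against the cluster median:
    #    members that deviate strongly receive small weights (low "confidence").
    median = np.median(kept, axis=0)
    weights = []
    for u in kept:
        reg = HuberRegressor().fit(median.reshape(-1, 1), u)
        residual = np.mean(np.abs(u - reg.predict(median.reshape(-1, 1))))
        weights.append(1.0 / (1.0 + residual))
    weights = np.array(weights) / np.sum(weights)

    # 3) Confidence-weighted average rectifies the contribution of poisoned parameters.
    return np.average(kept, axis=0, weights=weights)
```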
Source:
IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY
ISSN: 1556-6013
Year: 2024
Volume: 19
Page: 8845-8859
Impact Factor: 6.800 (JCR@2022)
ESI Highly Cited Papers on the List: 0