Abstract:
In recent years, federated learning (FL) has made significant progress, with a primary focus on privacy protection, model optimization, and enhancing system security and robustness. FL can be divided into two main types: centralized federated learning (CFL) and decentralized federated learning (DFL). Traditional CFL methods rely on a central server to aggregate model updates from clients, whereas DFL improves on CFL by exchanging model updates directly between clients in a peer-to-peer manner, eliminating the dependence on a central server and reducing the communication burden. In this work, we build a deep neural network model in the DFL setting and, to address the data heterogeneity that often arises in DFL, propose the comprehensive factors for neighbor selection (CFNS) method. With the assistance of a graph structure over the federation, CFNS selects neighboring clients with similar data distributions by jointly considering multiple factors. Moreover, although DFL mitigates risks such as single-point privacy breaches, privacy protection still requires further attention. To this end, we introduce a local differential privacy (LDP) module into client communication, further strengthening DFL's privacy protection. Experimental results show that our method achieves good performance while preserving privacy, outperforming the provided baseline methods. © 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM.
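The abstract does not detail which factors CFNS combines. As a hypothetical single-factor sketch (not the authors' method), one of the factors could be similarity of local label distributions: each client ranks candidate peers by cosine similarity of their label histograms and keeps the top-k as graph neighbors. The function name `select_neighbors` and the histogram inputs below are illustrative assumptions.

```python
import numpy as np

def select_neighbors(label_dists, client, k=2):
    """Rank candidate peers by cosine similarity of local label
    distributions and return the indices of the top-k as neighbors.

    Hypothetical one-factor sketch: the actual CFNS method combines
    multiple factors that the abstract does not specify.
    """
    dists = np.asarray(label_dists, dtype=float)
    me = dists[client]
    scores = []
    for j, other in enumerate(dists):
        if j == client:  # a client is never its own neighbor
            continue
        sim = float(me @ other / (np.linalg.norm(me) * np.linalg.norm(other)))
        scores.append((sim, j))
    scores.sort(reverse=True)  # most similar peers first
    return [j for _, j in scores[:k]]

# Four clients with label histograms over three classes; client 0's
# closest peer by distribution is client 1.
dists = [[0.8, 0.1, 0.1],
         [0.7, 0.2, 0.1],
         [0.1, 0.1, 0.8],
         [0.1, 0.8, 0.1]]
neighbors = select_neighbors(dists, client=0, k=2)
```

In a DFL round, each client would then exchange model updates only with the returned neighbor set rather than with every peer.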
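The abstract likewise leaves the LDP mechanism unspecified. A common realization, shown here purely as an illustrative sketch rather than the paper's implementation, is the Gaussian mechanism: each client clips its model update to a maximum L2 norm and adds Gaussian noise locally before any peer sees it. The name `privatize_update` and the parameter values are assumptions.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip an update vector to a maximum L2 norm, then add Gaussian
    noise, so only the perturbed update leaves the client."""
    rng = np.random.default_rng() if rng is None else rng
    update = np.asarray(update, dtype=float)
    norm = np.linalg.norm(update)
    if norm > clip_norm:  # bound each client's contribution
        update = update * (clip_norm / norm)
    noise = rng.normal(0.0, noise_std, size=update.shape)
    return update + noise

# Each client privatizes its update before peer-to-peer exchange.
noisy = privatize_update(np.ones(4), clip_norm=1.0, noise_std=0.05)
```

Because the noise is added on-device, the guarantee is local: no neighbor ever observes a raw update, which matches the decentralized setting where there is no trusted aggregator.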
Year: 2025
Page: 21-28
Language: English