Abstract:
Federated learning (FL) has emerged as a promising paradigm for privacy-preserving collaborative learning, enabling multiple devices to jointly train a global model without sharing their raw data. However, bias introduced during FL training significantly degrades model performance. This poster presents a novel FL algorithm that counteracts bias to improve performance. First, we analyze the causes of bias from a global perspective, identifying data heterogeneity and transmission probability as the two main sources. We then propose a method that combines a regularized local training procedure with a reweighted aggregation strategy to jointly mitigate bias. Through extensive experiments on real-world datasets, we demonstrate that our method outperforms various baseline FL methods in both convergence speed and accuracy. © 2024 IEEE.
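The abstract describes two components: regularized local training and transmission-aware reweighted aggregation. The poster's exact formulation is not given in this record, so the following is only a minimal sketch under stated assumptions: a FedProx-style proximal term for the local regularizer, and aggregation weights scaled by data size and the inverse of each client's (hypothetical) transmission probability so that rarely transmitting clients are not underrepresented.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w_global, X, y, mu=0.1, lr=0.05, steps=50):
    """Regularized local training (assumed FedProx-style): minimize the local
    squared loss plus a proximal term (mu/2)*||w - w_global||^2 that keeps the
    local model close to the current global model, limiting client drift."""
    w = w_global.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y) + mu * (w - w_global)
        w -= lr * grad
    return w

def reweighted_aggregate(updates, n_samples, p_tx):
    """Reweighted aggregation (assumed form): weight each received update by
    its data size divided by its transmission probability, correcting the
    sampling bias from clients that transmit infrequently."""
    weights = np.array([n / p for n, p in zip(n_samples, p_tx)], dtype=float)
    weights /= weights.sum()
    return sum(wt * u for wt, u in zip(weights, updates))

# Synthetic heterogeneous clients: shared true model plus per-client shifts.
d = 5
w_true = rng.normal(size=d)
clients = []
for n in (200, 50, 120):
    X = rng.normal(size=(n, d))
    y = X @ (w_true + 0.1 * rng.normal(size=d))  # heterogeneous local data
    clients.append((X, y))
p_tx = [0.9, 0.3, 0.6]  # hypothetical per-client transmission probabilities

w = np.zeros(d)
for _ in range(20):  # federated rounds
    received, sizes, probs = [], [], []
    for (X, y), p in zip(clients, p_tx):
        if rng.random() < p:  # client transmits its update with probability p
            received.append(local_update(w, X, y))
            sizes.append(len(y))
            probs.append(p)
    if received:
        w = reweighted_aggregate(received, sizes, probs)

err = float(np.linalg.norm(w - w_true))
```

Dividing the aggregation weight by `p_tx` makes each client's expected contribution across rounds proportional to its data size, which is one standard way to debias aggregation under random transmission; the poster's actual weighting scheme may differ.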
Year: 2024
Language: English