Abstract:
Federated learning addresses the 'data islands' problem that arises from privacy concerns, enabling multiple users to train a model collaboratively without sharing their private datasets. In the training process of federated learning, however, effectively coordinating communication between the participants and the server is essential for training efficiency. Traditional synchronous and asynchronous federated learning algorithms cannot simultaneously satisfy the requirements of convergence efficiency and accuracy stability. To balance algorithm efficiency and model stability, this paper proposes a model parameter update strategy for an enhanced asynchronous federated learning algorithm (EAFL). Under this strategy, the server stores the latest global model parameters, the previous round's global model parameters, and the update between the two, and decides which type of model to issue to each client according to the client's sensitivity difference. Each client then executes a different local update procedure depending on the model type it receives, improving training efficiency while keeping training accuracy stable. Experimental results show that EAFL achieves better training results on the MNIST and CIFAR-10 datasets than traditional federated learning parameter update algorithms. This improvement can offer new ideas for solving computer vision tasks, such as object detection, with scattered devices. © 2023 IEEE.
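The abstract outlines the server-side bookkeeping of EAFL: the server keeps the latest global parameters, the previous round's parameters, and their difference, and chooses what to issue to each client. Below is a minimal Python sketch of one plausible reading of that logic, assuming PyTorch tensors; the class name EAFLServer, the staleness_threshold parameter, the staleness-based dispatch rule, and the weighted aggregation are illustrative assumptions, not the paper's actual algorithm.

```python
import copy
import torch

# Hypothetical reconstruction of the server-side state described in the
# abstract; names and the dispatch/aggregation rules are illustrative only.
class EAFLServer:
    def __init__(self, model, staleness_threshold=2):
        self.latest = copy.deepcopy(model.state_dict())  # latest global parameters
        self.previous = copy.deepcopy(self.latest)       # previous round's parameters
        # Stored update: difference between the two parameter sets.
        self.delta = {k: torch.zeros_like(v) for k, v in self.latest.items()}
        self.round = 0
        self.staleness_threshold = staleness_threshold

    def dispatch(self, client_round):
        """Decide which model type to issue, here based on client staleness
        (one possible reading of the paper's 'sensitivity difference')."""
        staleness = self.round - client_round
        if staleness <= self.staleness_threshold:
            # Fresh client: the parameter update alone suffices locally.
            return ("delta", self.delta)
        # Stale client: issue the full latest model to resynchronize.
        return ("full", self.latest)

    def aggregate(self, client_state, weight=0.5):
        """Fold one client's parameters into a new global model."""
        self.previous = self.latest
        self.latest = {
            k: (1 - weight) * v + weight * client_state[k]
            for k, v in self.previous.items()
        }
        self.delta = {k: self.latest[k] - self.previous[k] for k in self.latest}
        self.round += 1
```

For instance, `EAFLServer(torch.nn.Linear(10, 2))` would initialize the stored parameter sets from a small model, after which `dispatch` and `aggregate` drive one asynchronous round per client.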
Year: 2023
Page: 9-15
Language: English
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count: 1
ESI Highly Cited Papers on the List: 0