Abstract:
A totally asynchronous gradient algorithm with a fixed step size is proposed for federated learning. A mathematical model is presented and a convergence result is established, based on the concept of a macro-iteration sequence. The contribution shows that the asynchronous federated learning method converges even when workers update the gradients of the loss functions without any order or synchronization and with possibly unbounded delays. © 2024 IEEE.
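The abstract describes workers pushing gradient updates with a fixed step size, in arbitrary order and with possibly unbounded delays. The following is a minimal Python sketch of such a totally asynchronous scheme on toy quadratic losses; the delay model, the losses, and every name below are illustrative assumptions, not the paper's actual construction.

```python
# Illustrative sketch only: simulates totally asynchronous gradient descent
# with a fixed step size, where each worker computes gradients at a stale
# copy of the parameters and the server applies them as they arrive.
import numpy as np

rng = np.random.default_rng(0)
dim, n_workers, step = 5, 4, 0.05

# Assumed toy losses: worker k holds f_k(x) = 0.5 * ||x - c_k||^2,
# so the global minimizer is the mean of the centers c_k.
centers = rng.normal(size=(n_workers, dim))
x_star = centers.mean(axis=0)

x = np.zeros(dim)                              # server parameters
stale = [x.copy() for _ in range(n_workers)]   # each worker's stale copy

for t in range(20_000):
    k = int(rng.integers(n_workers))    # an arbitrary worker finishes next
    grad = stale[k] - centers[k]        # gradient of f_k at the stale copy
    x -= (step / n_workers) * grad      # fixed-step update, no synchronization
    # The worker refreshes its copy only occasionally; a geometric refresh
    # time stands in for delays that are unbounded but finite almost surely.
    if rng.random() < 0.1:
        stale[k] = x.copy()

print("distance to minimizer:", np.linalg.norm(x - x_star))
```

Despite gradients being computed at outdated parameters and applied in no particular order, the iterates in this toy run drift toward the global minimizer, which is the qualitative behavior the convergence result asserts.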
Year: 2024
Page: 956-963
Language: English