Abstract:
A totally asynchronous gradient algorithm with fixed step size is proposed for federated learning. A mathematical model is presented and a convergence result is established. The convergence result is based on the concept of a macro-iteration sequence. The main contribution is to show that the asynchronous federated learning method converges when gradients of loss functions are updated by workers without ordering or synchronization and with possibly unbounded delays.
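To make the setting concrete, the following is a minimal sketch, not the paper's actual algorithm: it simulates workers that compute gradients on possibly stale copies of a shared model, whose updates the server applies with a fixed step size whenever they arrive, in arbitrary order and with arbitrary simulated delays. The quadratic per-worker losses, the delay distribution, and all names are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration (not the paper's exact method): worker k owns a
# local quadratic loss f_k(x) = 0.5 * ||x - c_k||^2, so grad f_k(x) = x - c_k.
# The server applies each incoming gradient with a fixed step size, with no
# ordering or synchronization among workers and with arbitrary delays.

rng = np.random.default_rng(0)
dim, n_workers, step = 4, 5, 0.1
centers = rng.normal(size=(n_workers, dim))        # c_k: per-worker optima
x = np.zeros(dim)                                   # shared model on the server

# in_flight holds gradients computed on stale snapshots of x, each paired with
# a random simulated arrival time, so updates land out of order and late.
in_flight = []
for t in range(2000):
    k = int(rng.integers(n_workers))                # some worker reads the model
    grad = x - centers[k]                           # gradient at a possibly stale x
    delay = int(rng.integers(1, 50))                # stand-in for an unbounded delay
    in_flight.append((t + delay, grad))

    # Apply every update whose simulated delay has elapsed, in arrival order.
    arrived = [(a, g) for (a, g) in in_flight if a <= t]
    in_flight = [(a, g) for (a, g) in in_flight if a > t]
    for _, g in arrived:
        x -= step * g                               # fixed step size, no locking

print("final x:               ", x)
print("mean of centers (target):", centers.mean(axis=0))
```

With a small enough fixed step size, the iterates settle near the minimizer of the average loss (here, the mean of the c_k) despite the stale, unordered updates, which is the qualitative behavior the paper's convergence result formalizes via macro-iteration sequences.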
Source:
2024 IEEE INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM WORKSHOPS, IPDPSW 2024
Year: 2024
Page: 956-963