Abstract:
Distributed modeling and monitoring are commonly used in modern industries. Because of the ever-growing demands for energy conservation and carbon dioxide emission reduction, attempts have been made to improve the energy efficiency of distributed systems. Here, we propose a hierarchical democratized learning framework to optimize distributed computation and communication consumption. First, we split edge devices into logical learning groups, each of which cooperates with a regional task-related server. Then, each learning group performs hierarchical learning through generalization and specialization procedures. Through hierarchical clustering, computation tasks are coordinated more efficiently while communication requirements are reduced. We conducted experiments on the widely used FMNIST and CIFAR-10 datasets, demonstrating that hierarchical democratized modeling incurs lower computation and communication costs than existing federated analytics models. © 2023 IEEE.
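The abstract outlines a two-level procedure: devices are grouped under regional servers that aggregate within each group (specialization), and group-level models are then aggregated globally (generalization). The following is a minimal conceptual sketch of that grouping under simple assumptions: model parameters are flat NumPy vectors, aggregation is plain averaging, and the names EdgeDevice, RegionalServer, and global_round are illustrative rather than taken from the paper.

```python
# Hedged sketch of hierarchical (group-then-global) aggregation.
# All class/function names and the averaging rule are assumptions for illustration.
import numpy as np


class EdgeDevice:
    """Holds a local model vector and performs one step of local training."""

    def __init__(self, dim, rng):
        self.weights = rng.normal(size=dim)

    def local_update(self, group_weights, lr=0.1):
        # Specialization: start from the group model, then drift toward
        # the device's own (here, synthetic) local solution.
        self.weights = group_weights + lr * (self.weights - group_weights)
        return self.weights


class RegionalServer:
    """Aggregates one logical learning group of edge devices."""

    def __init__(self, devices):
        self.devices = devices
        self.group_weights = np.mean([d.weights for d in devices], axis=0)

    def group_round(self):
        # Each member specializes from the current group model,
        # then the regional server averages the results.
        updates = [d.local_update(self.group_weights) for d in self.devices]
        self.group_weights = np.mean(updates, axis=0)
        return self.group_weights


def global_round(groups):
    # Generalization: average the group-level models into a global model and
    # push it back down, so only one vector per group crosses the
    # region-to-cloud link instead of one per device.
    group_models = [g.group_round() for g in groups]
    global_weights = np.mean(group_models, axis=0)
    for g in groups:
        g.group_weights = global_weights.copy()
    return global_weights


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 8
    # Split 12 edge devices into 3 logical learning groups of 4.
    groups = [RegionalServer([EdgeDevice(dim, rng) for _ in range(4)])
              for _ in range(3)]
    for _ in range(5):
        w = global_round(groups)
    print("global model norm:", np.linalg.norm(w))
```

In this sketch the communication saving comes purely from the topology: per global round, the top level exchanges one model per group rather than one per device, which is the intuition behind the reduced communication cost reported in the abstract.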
Year: 2023
Page: 279-284
Language: English
WoS CC Cited Count: 0
ESI Highly Cited Papers on the List: 0