Abstract:
In a wireless sensor network (WSN), centralized learning, which transmits all training samples scattered across different sensor nodes to a central data center to train a classifier, incurs significant communication cost. To reduce this cost, a distributed learning method for kernel minimum squared error (KMSE) with an L1 regularization term was studied; it relies only on in-network processing between single-hop neighboring nodes. Each node obtains its local optimal sparse model by formulating the L1-regularized KMSE optimization problem on its local training samples and solving it with parallel projections and the alternating direction method of multipliers; a consistent model is then reached on all nodes through a global average consensus algorithm. To carry out this method, a new distributed training algorithm for L1-regularized kernel minimum squared error based on parallel projections (L1-DKMSE-PP) was proposed. Simulations show that L1-DKMSE-PP achieves almost the same prediction accuracy as its centralized counterpart with a sparser model and, more importantly, significantly reduces the communication cost. © 2016, Editorial Department of Journal of Beijing University of Posts and Telecommunications. All rights reserved.
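The abstract describes a two-stage procedure: each node solves an L1-regularized KMSE problem on its own samples (the paper combines parallel projections with the alternating direction method of multipliers), and the nodes then agree on a common model through single-hop average consensus. Below is a minimal, illustrative Python sketch of those two stages; the RBF kernel choice, the lasso-style ADMM splitting, the fixed penalty rho, the consensus weight matrix W, and all function names are assumptions for illustration, not the paper's exact formulation (in particular, the actual algorithm couples neighboring nodes through parallel projections rather than a simple post-hoc averaging of finished models).

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Z (assumed kernel).
    d2 = (np.sum(X ** 2, axis=1)[:, None]
          + np.sum(Z ** 2, axis=1)[None, :]
          - 2.0 * X @ Z.T)
    return np.exp(-gamma * d2)

def soft_threshold(v, t):
    # Proximal operator of the L1 norm (element-wise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def local_l1_kmse_admm(K, y, lam=0.1, rho=1.0, n_iter=200):
    # Local step: solve min_a 0.5 * ||K a - y||^2 + lam * ||a||_1 by ADMM
    # (lasso splitting: a is the least-squares variable, z its sparse copy).
    n = K.shape[1]
    z = np.zeros(n)
    u = np.zeros(n)
    KtK = K.T @ K
    Kty = K.T @ y
    A_inv = np.linalg.inv(KtK + rho * np.eye(n))
    for _ in range(n_iter):
        a = A_inv @ (Kty + rho * (z - u))     # quadratic subproblem
        z = soft_threshold(a + u, lam / rho)  # L1 proximal step
        u = u + a - z                         # scaled dual update
    return z                                  # sparse local coefficient vector

def average_consensus(local_models, W, n_rounds=50):
    # Consensus step: repeated one-hop averaging with a doubly-stochastic
    # weight matrix W built from the single-hop neighbor graph; each row of V
    # is one node's model, and all rows converge to the network average.
    V = np.asarray(local_models, dtype=float)
    for _ in range(n_rounds):
        V = W @ V
    return V
```

In this sketch a node would call local_l1_kmse_admm on its local kernel matrix and labels, and the network would then run average_consensus on the stacked coefficient vectors; this presumes the nodes share a common model representation of equal dimension, which is an assumption made here for simplicity.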
Source:
Journal of Beijing University of Posts and Telecommunications
ISSN: 1007-5321
Year: 2016
Issue: 3
Volume: 39
Page: 80-84