Abstract:
The neural-network-based broad learning system (BLS-NN) is inefficient at modeling small datasets of various dimensions. The tree-based BLS (TBLS) is designed for small-data modeling by introducing nondifferentiable modules and an ensemble strategy into the traditional broad learning system (BLS): TBLS replaces the neurons of BLS with tree modules that map the input data. We further present three new TBLS variants and their incremental learning implementations, motivated by deep, broad, and ensemble learning. Their major distinction lies in the incremental learning strategies, which are based on: 1) the mean square error (MSE); 2) the pseudo-inverse; and 3) pseudo-inverse theory combined with stacked representations. This study therefore further explores the BLS framework built on nondifferentiable modules. Simulations compare the proposed methods with state-of-the-art (SOTA) BLS-NN and tree methods on high-, medium-, and low-dimensional benchmark datasets. The results show that the proposed TBLS outperforms BLS-NN and markedly improves modeling accuracy when the training data are small.
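The following is a minimal sketch, not the authors' implementation, of the core idea the abstract describes: replace the feature-mapping neurons of a BLS with tree modules, collect the tree outputs into a broad feature matrix, and solve the output weights in closed form via a ridge-regularized pseudo-inverse. The function names, tree depth, bootstrap resampling, and regularization term `reg` are illustrative assumptions, not details from the paper.

    # Sketch of a tree-based broad learning system (assumptions noted above).
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def fit_tree_bls(X, y, n_trees=10, reg=1e-3, seed=0):
        rng = np.random.RandomState(seed)
        trees, features = [], []
        for i in range(n_trees):
            # Each "tree module" is fit on a bootstrap resample, playing the
            # role of a randomly initialized feature-mapping node in BLS.
            idx = rng.randint(0, len(X), len(X))
            t = DecisionTreeRegressor(max_depth=3, random_state=seed + i)
            t.fit(X[idx], y[idx])
            trees.append(t)
            features.append(t.predict(X))
        A = np.column_stack(features)  # broad feature matrix, one column per tree
        # Closed-form output weights via the ridge-regularized pseudo-inverse,
        # as in standard BLS: W = (A^T A + reg * I)^{-1} A^T y.
        W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ y)
        return trees, W

    def predict_tree_bls(trees, W, X):
        A = np.column_stack([t.predict(X) for t in trees])
        return A @ W

In this sketch, incremental learning in the flavor the abstract mentions would amount to appending new tree modules (new columns of A) and re-solving W via the pseudo-inverse, rather than retraining by gradient descent.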
Source: IEEE Transactions on Neural Networks and Learning Systems
ISSN: 2162-237X
Year: 2022
Issue: 7
Volume: 35
Page: 1-15
Impact Factor: 10.4 (JCR@2022)
ESI Discipline: COMPUTER SCIENCE
ESI HC Threshold: 46
JCR Journal Grade:1
CAS Journal Grade:1
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count: 10
ESI Highly Cited Papers on the List: 0