Abstract:
To overcome the drawbacks of the gradient descent (GD) algorithm, namely slow convergence and a tendency to become trapped in local minima, this paper proposes an adaptive optimum steepest descent (AOSD) learning algorithm for the recurrent radial basis function (RRBF) neural network. Unlike the traditional GD algorithm, AOSD integrates an adaptive learning rate, which accelerates training convergence and improves the network's performance in nonlinear system modeling. Several comparisons show that the proposed RRBF converges faster and yields better prediction performance.
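The abstract does not give the paper's exact AOSD update rule, but the core idea of steepest descent with an adaptive learning rate can be sketched as follows. This is a minimal, hypothetical illustration (a "bold driver"-style rule: grow the step size after a successful step, shrink it after an overshoot), not the authors' algorithm, and the function names and parameters are assumptions for the example.

```python
import numpy as np

def adaptive_steepest_descent(grad, loss, x0, lr=0.1,
                              grow=1.05, shrink=0.5, iters=100):
    """Minimize `loss` by steepest descent, adapting the learning rate.

    A step that lowers the loss is accepted and the learning rate is
    increased; a step that raises the loss is rejected and the learning
    rate is cut. This mimics, in spirit, replacing a fixed GD step size
    with an adaptive one to speed up convergence.
    """
    x = np.asarray(x0, dtype=float)
    prev = loss(x)
    for _ in range(iters):
        x_new = x - lr * grad(x)
        cur = loss(x_new)
        if cur < prev:           # successful step: accept, enlarge lr
            x, prev = x_new, cur
            lr *= grow
        else:                    # overshoot: reject step, shrink lr
            lr *= shrink
    return x

# Toy usage: minimize f(x) = (x - 3)^2, whose minimum is at x = 3.
loss = lambda x: (x - 3.0) ** 2
grad = lambda x: 2.0 * (x - 3.0)
x_min = adaptive_steepest_descent(grad, loss, x0=0.0)
```

On this convex toy problem the adaptive rule converges to the minimizer quickly; the accept/reject test guarantees the loss never increases, which is what makes growing the step size safe.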
Source:
PROCEEDINGS OF THE 36TH CHINESE CONTROL CONFERENCE (CCC 2017)
ISSN: 2161-2927
Year: 2017
Page: 3942-3947
Language: English
WoS CC Cited Count: 3
ESI Highly Cited Papers on the List: 0