Abstract:
In this paper, a neural-network-based policy learning method is developed to solve the robust stabilization problem for a class of continuous-time nonlinear systems with both internal dynamic uncertainties and input matrix uncertainties. First, the robust stabilization problem is converted into an optimal control problem by choosing an appropriate cost function and proving system stability. Then, to solve the Hamilton-Jacobi-Bellman equation, a policy iteration algorithm is employed by constructing and training a critic neural network. This algorithm yields an approximate optimal control policy, from which the solution of the robust stabilization problem is derived as well. Finally, a numerical example and an experimental simulation are provided to verify the effectiveness of the proposed strategy.
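As a hedged illustration of the scheme the abstract describes (policy iteration with a critic network approximating the solution of the Hamilton-Jacobi-Bellman equation), the sketch below implements generic critic-based policy iteration for a scalar system x_dot = f(x) + g(x)u. The dynamics f and g, the cost terms Q and R, and the polynomial critic basis phi are assumptions made for illustration and are not taken from the paper.

```python
# A minimal sketch of critic-based policy iteration for a continuous-time
# system x_dot = f(x) + g(x)*u with cost integral of Q(x) + R*u^2.
# The scalar dynamics, basis functions, and tuning values are illustrative
# assumptions for this sketch, not the system or network used in the paper.
import numpy as np

f = lambda x: -x + 0.5 * x**3        # assumed nominal internal dynamics
g = lambda x: 1.0 + 0.0 * x          # assumed input gain
Q = lambda x: x**2                   # state penalty
R = 1.0                              # control penalty

# Critic approximates the value function: V(x) ~= w . phi(x).
phi  = lambda x: np.array([x**2, x**4, x**6])
dphi = lambda x: np.array([2 * x, 4 * x**3, 6 * x**5])   # d(phi)/dx

xs = np.linspace(-1.0, 1.0, 201)     # states sampled for the critic fit
w = np.zeros(3)
u = lambda x: 0.0 * x                # initial admissible (zero) policy

for _ in range(20):
    # Policy evaluation: choose w so the Bellman residual
    #   dV/dx * (f(x) + g(x)*u(x)) + Q(x) + R*u(x)^2
    # is minimized in a least-squares sense over the sampled states.
    ui = np.array([u(x) for x in xs])
    A = np.array([dphi(x) * (f(x) + g(x) * uu) for x, uu in zip(xs, ui)])
    b = -(Q(xs) + R * ui**2)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)

    # Policy improvement: u(x) = -(1/(2R)) * g(x) * dV/dx.
    u = lambda x, w=w: -g(x) * (dphi(x) @ w) / (2.0 * R)

print("critic weights:", w)
print("u(0.5) =", u(0.5))
```

With the admissible initial policy u = 0, each pass refits the critic weights and then improves the policy, which is the standard alternation the abstract refers to as policy iteration.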
Source: 2020 CHINESE AUTOMATION CONGRESS (CAC 2020)
ISSN: 2688-092X
Year: 2020
Page: 987-992
Language: English