Abstract:
In machine learning, the core technique of artificial intelligence, reinforcement learning is a class of strategies that focus on learning through interaction between the machine and its environment. As an important branch of reinforcement learning, the adaptive critic technique is closely related to dynamic programming and optimization design. To effectively solve optimal control problems of complex dynamical systems, the adaptive dynamic programming approach was proposed by combining adaptive critic and dynamic programming with artificial neural networks, and it has attracted extensive attention. In particular, great progress has been made on robust adaptive critic control design under uncertainties and disturbances. It is now regarded as a necessary route toward constructing intelligent learning systems and achieving true brain-like intelligence. This paper presents a comprehensive survey of learning-based robust adaptive critic control theory and methods, including self-learning robust stabilization, adaptive trajectory tracking, event-driven robust control, and adaptive H∞ control design. It also covers a general analysis of adaptive critic systems in terms of stability, convergence, optimality, and robustness. In addition, in light of emerging techniques such as artificial intelligence, big data, deep learning, and knowledge automation, it discusses future prospects of robust adaptive critic control. Copyright © 2019 Acta Automatica Sinica. All rights reserved.
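The adaptive dynamic programming idea summarized in the abstract can be illustrated with a minimal sketch. All numbers and the setup below are illustrative assumptions not taken from the paper: a scalar discrete-time linear system with quadratic cost, a critic parameterized by a single scalar p in V(x) = p·x², and a Riccati-style value iteration on p followed by an actor (feedback gain) derived from the converged critic.

```python
# Minimal adaptive-critic (ADP) sketch, assuming a scalar system
# x_{k+1} = a*x_k + b*u_k with stage cost q*x^2 + r*u^2 and a
# quadratic cost-to-go V(x) = p*x^2. Illustrative values only.

a, b = 1.2, 1.0      # open-loop dynamics (unstable: |a| > 1)
q, r = 1.0, 1.0      # state and control cost weights

p = 0.0              # critic parameter, initialized at zero
for _ in range(200):
    # Critic update: Bellman equation with the minimizing control
    # substituted in analytically (a Riccati-style iteration)
    p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)

# Actor derived from the converged critic: u = -k*x
k = (p * a * b) / (r + b * b * p)
a_cl = a - b * k     # closed-loop dynamics, stable if |a_cl| < 1

print(p, k, a_cl)
```

The same structure scales up in the surveyed methods: a critic network approximates the cost-to-go, and the actor (control law) is improved from it, with neural networks replacing the single scalar parameter used here.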
Source: Acta Automatica Sinica
ISSN: 0254-4156
Year: 2019
Issue: 6
Volume: 45
Page: 1031-1043
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count: 22
ESI Highly Cited Papers on the List: 0