Abstract:
In this paper, a novel policy iteration adaptive dynamic programming (ADP) algorithm, termed the "local policy iteration ADP algorithm," is presented to obtain the optimal control for discrete stochastic processes. In the proposed algorithm, the iterative decision rules are updated only on a local subset of the whole state space, which significantly reduces the computational burden compared with the conventional policy iteration algorithm. A convergence analysis shows that the iterative value functions are monotonically nonincreasing and converge to the optimum within a local policy space; this local policy space is characterized in detail for the first time. Under a few mild constraints, it is further shown that the iterative value function converges to the optimal performance index function of the global policy space. Finally, a simulation example is presented to validate the effectiveness of the developed method.
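The core idea described in the abstract, updating the decision rule only on a local subset of states at each iteration while evaluating the policy exactly, can be illustrated with a minimal sketch. The toy MDP below (transition tensor `P`, stage-cost matrix `C`, discount `gamma`, and a randomly chosen local subset) is an illustrative assumption, not the paper's actual problem setup or algorithm.

```python
import numpy as np

def local_policy_iteration(P, C, gamma=0.95, local_frac=0.5, iters=50, seed=0):
    """Sketch of policy iteration with local policy improvement.

    P[s, a, s'] : transition probabilities (hypothetical toy MDP)
    C[s, a]     : stage cost to be minimized
    local_frac  : fraction of states whose decision rule is updated
                  per iteration (the "local space" idea, as an assumption)
    """
    rng = np.random.default_rng(seed)
    n_s, n_a, _ = P.shape
    policy = np.zeros(n_s, dtype=int)  # arbitrary initial admissible policy
    for _ in range(iters):
        # Exact policy evaluation: solve (I - gamma * P_pi) V = c_pi
        P_pi = P[np.arange(n_s), policy]          # (n_s, n_s)
        c_pi = C[np.arange(n_s), policy]          # (n_s,)
        V = np.linalg.solve(np.eye(n_s) - gamma * P_pi, c_pi)
        # Local policy improvement: greedify only on a subset of states,
        # leaving the decision rule on all other states unchanged
        local = rng.choice(n_s, size=max(1, int(local_frac * n_s)),
                           replace=False)
        Q = C + gamma * (P @ V)                   # (n_s, n_a) action values
        policy[local] = np.argmin(Q[local], axis=1)
    return policy, V
```

Because each policy is still evaluated exactly, the standard policy-improvement argument applies even when only a subset of states is greedified, which is consistent with the monotone-nonincreasing value functions claimed in the abstract.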
Keyword:
Reprint Author's Address:
Source: IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS
ISSN: 2168-2216
Year: 2020
Issue: 11
Volume: 50
Page: 3972-3985
Impact Factor: 8.700 (JCR@2022)
ESI Discipline: ENGINEERING;
ESI HC Threshold:115
Cited Count:
WoS CC Cited Count: 13
SCOPUS Cited Count: 16
ESI Highly Cited Papers on the List: 0