Abstract:
Parallelization is widely employed to improve the exploration ability of controllers. However, lightweight schemes that reduce homogeneous policies while retaining theoretical guarantees are rare. This article proposes a novel parallel scheme for solving optimal control problems. In brief, we design a global indicator that inherits the theoretical guarantees of a class of iterative reinforcement learning algorithms. By generating a tentative function, the global indicator guides and communicates with the parallel controllers to accelerate the learning process. Using two typical exploration policies, the parallel scheme rapidly compresses the neighborhood of the optimal cost function. In addition, two parallel algorithms based on value iteration and Q-learning are established to improve data efficiency through different extensions. Finally, two benchmark problems are presented to demonstrate the learning effectiveness of the proposed parallel scheme.
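The abstract describes the scheme only at a high level. Purely as an illustration of the general idea (several parallel controllers with different exploration policies, periodically synchronized through a shared best-so-far table standing in for a global indicator), the minimal Python sketch below may help. The toy chain MDP, the epsilon values, and the elementwise-max aggregation are all assumptions made for this sketch, not the construction used in the article.

```python
import numpy as np

# Toy deterministic chain MDP: states 0..N-1, actions {0: left, 1: right};
# reward 1.0 on reaching the terminal state N-1, 0 otherwise.
N_STATES, N_ACTIONS, GAMMA = 6, 2, 0.9

def step(s, a):
    s_next = max(s - 1, 0) if a == 0 else min(s + 1, N_STATES - 1)
    return s_next, float(s_next == N_STATES - 1), s_next == N_STATES - 1

def greedy(q_row, rng):
    # Break ties randomly so an all-zero table still explores.
    best = np.flatnonzero(q_row == q_row.max())
    return int(rng.choice(best))

def q_learning_worker(q_init, epsilon, episodes, alpha=0.5, max_steps=200, seed=None):
    """One 'parallel controller': epsilon-greedy Q-learning seeded from a shared table."""
    rng = np.random.default_rng(seed)
    q = q_init.copy()
    for _ in range(episodes):
        s, done = 0, False
        for _ in range(max_steps):
            a = rng.integers(N_ACTIONS) if rng.random() < epsilon else greedy(q[s], rng)
            s_next, r, done = step(s, a)
            target = r + (0.0 if done else GAMMA * q[s_next].max())
            q[s, a] += alpha * (target - q[s, a])
            s = s_next
            if done:
                break
    return q

# Shared "indicator" (placeholder aggregation, not the paper's construction):
# keep the elementwise-best Q estimate seen so far and broadcast it back to
# both exploration policies at every communication round.
indicator = np.zeros((N_STATES, N_ACTIONS))
for rnd in range(5):
    candidates = [
        q_learning_worker(indicator, epsilon=0.3, episodes=30, seed=rnd),         # exploratory policy
        q_learning_worker(indicator, epsilon=0.05, episodes=30, seed=100 + rnd),  # near-greedy policy
    ]
    indicator = np.maximum.reduce(candidates + [indicator])
    print(f"round {rnd}: estimated V(0) = {indicator[0].max():.3f}")  # optimum here is GAMMA**4 ~ 0.656
```

The elementwise-max aggregation is only a simplistic stand-in for the communication step; in the article, the global indicator instead generates a tentative function with the stated theoretical guarantees to guide the parallel controllers.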
Source: IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS
ISSN: 2168-2216
Year: 2024
Issue: 10
Volume: 54
Page: 6320-6331
Impact Factor (JCR@2022): 8.700
Cited Count:
WoS CC Cited Count: 2
SCOPUS Cited Count: 3
ESI Highly Cited Papers on the List: 0