Abstract:
The PID (Proportional-Integral-Derivative) controller has been broadly applied in many control engineering tasks as a model-free low-level control strategy due to its simplicity and fast computation. However, it still suffers from instability due to its feedback mechanism, especially in complex self-driving tasks such as lane following. Traditional approaches to this problem include classical PID tuning and expert-system-based PID tuning methods, but these are unsuitable due to their low sample efficiency in an uncertain environment that is not fully known. In this paper, we propose a Q-learning-based PID (Q-PID) algorithm to solve this problem. In the algorithm, the policy for the optimal PID parameters is learned via an incremental exploration-exploitation procedure: the approximated Q-value function is learned with an experience replay mechanism, and the optimal policy is calculated by maximizing the Q-function. Simulation results on the lane following task demonstrate the feasibility of the proposed algorithm. © 2022 IEEE.
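The abstract's idea — learning PID gains by exploring, storing outcomes in a replay buffer, and acting greedily on a learned Q-function — can be sketched as follows. This is not the authors' implementation: the plant model, the candidate gain sets, and the bandit-style (stateless) Q-update over discrete actions are all illustrative assumptions made for a self-contained example.

```python
import random
from collections import deque

class PID:
    """Standard discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt=0.1):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def run_episode(gains, x0=1.0, steps=50):
    """Toy 1D lateral-error plant (stand-in for lane following): x' = x - u*dt.
    Returns the negative accumulated |error| as the episode reward."""
    pid = PID(*gains)
    x, cost = x0, 0.0
    for _ in range(steps):
        u = pid.step(x)          # error = x, since the target lateral offset is 0
        x = x - u * pid.dt
        cost += abs(x)
    return -cost

# Discrete action set: candidate (Kp, Ki, Kd) triples (hypothetical values).
ACTIONS = [(0.5, 0.0, 0.0), (1.0, 0.1, 0.05), (2.0, 0.2, 0.1), (4.0, 0.5, 0.2)]

def q_pid(episodes=200, alpha=0.2, epsilon=0.3, seed=0):
    """Epsilon-greedy Q-learning over the gain sets with a small replay buffer."""
    rng = random.Random(seed)
    q = [0.0] * len(ACTIONS)
    replay = deque(maxlen=32)
    for _ in range(episodes):
        # exploration-exploitation: random action with prob. epsilon, else greedy
        if rng.random() < epsilon:
            a = rng.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=q.__getitem__)
        r = run_episode(ACTIONS[a])
        replay.append((a, r))
        # experience replay: update Q from a random minibatch of stored transitions
        for ai, ri in rng.sample(list(replay), min(8, len(replay))):
            q[ai] += alpha * (ri - q[ai])
    best = max(range(len(ACTIONS)), key=q.__getitem__)
    return ACTIONS[best], q
```

Running `q_pid()` returns the gain triple with the highest learned Q-value together with the full Q-table; in the paper's setting the action space, reward, and Q-function approximation would be tailored to the actual lane-following simulator.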
Year: 2022
Page: 38-42
Language: English