Abstract:
The autonomous control of aircraft poses a challenging high-dimensional continuous control problem, with applications in drones, autonomous driving systems, and flight simulators. Reinforcement Learning (RL), particularly with recent advances in deep neural networks, enables agents to master complex continuous control tasks. This work models aircraft heading and altitude control as a Markov decision process within the RL framework, simulated in JSBSim. Using the Proximal Policy Optimization (PPO) algorithm with a tailored reward function, an agent trained on a comprehensive set of state variables learns to reach a target altitude and heading at a designated speed. The set of state variables was then reduced, identifying a refined subset strongly correlated with altitude control. This reduced selection maintains performance while lowering computational complexity and improving convergence speed, offering a practical approach to autonomous aircraft control with limited state information. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.
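The abstract does not specify the exact state set, reward function, or simulator configuration. The sketch below is only an illustration of how such a PPO/JSBSim setup could look, using the gymnasium and stable-baselines3 libraries; the aircraft model ("c172x"), the chosen JSBSim properties, the target values, the reward shaping, and the episode limits are all assumptions for illustration, not the authors' settings.

```python
# Minimal sketch: a JSBSim-backed Gymnasium environment for altitude/heading
# tracking, trained with PPO from stable-baselines3. Everything task-specific
# here (aircraft model, observation subset, reward shaping, targets) is an
# assumption for illustration, not the paper's exact configuration.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
import jsbsim
from stable_baselines3 import PPO

TARGET_ALT_FT = 5000.0     # assumed target altitude
TARGET_HDG_DEG = 90.0      # assumed target heading
TARGET_SPEED_KTS = 120.0   # assumed target airspeed


class HeadingAltitudeEnv(gym.Env):
    """Heading/altitude hold formulated as an MDP over a JSBSim flight model."""

    def __init__(self, aircraft="c172x", agent_hz=5):
        super().__init__()
        self.fdm = jsbsim.FGFDMExec(None)  # None -> data bundled with the pip wheel
        self.fdm.set_dt(1.0 / 60.0)        # simulator step of 1/60 s (assumed)
        self.fdm.load_model(aircraft)
        self.sim_steps_per_action = int(60 / agent_hz)
        # Actions: elevator, aileron, rudder (in [-1, 1]) and throttle (in [0, 1]).
        self.action_space = spaces.Box(
            low=np.array([-1.0, -1.0, -1.0, 0.0], dtype=np.float32),
            high=np.ones(4, dtype=np.float32),
        )
        # Observations: tracking errors plus attitude and body rates -- a reduced,
        # altitude-relevant subset in the spirit of the paper's state pruning.
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(8,), dtype=np.float32)
        self.max_steps = 2000

    def _obs(self):
        f = self.fdm
        return np.array(
            [
                (f["position/h-sl-ft"] - TARGET_ALT_FT) / 1000.0,
                np.radians(f["attitude/psi-deg"] - TARGET_HDG_DEG),
                (f["velocities/vc-kts"] - TARGET_SPEED_KTS) / 100.0,
                f["attitude/phi-rad"],
                f["attitude/theta-rad"],
                f["velocities/p-rad_sec"],
                f["velocities/q-rad_sec"],
                f["velocities/r-rad_sec"],
            ],
            dtype=np.float32,
        )

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        f = self.fdm
        f["ic/h-sl-ft"] = TARGET_ALT_FT + self.np_random.uniform(-500.0, 500.0)
        f["ic/psi-true-deg"] = self.np_random.uniform(0.0, 360.0)
        f["ic/u-fps"] = TARGET_SPEED_KTS * 1.688  # knots -> ft/s
        f.run_ic()
        f["propulsion/set-running"] = -1          # start all engines
        self.steps = 0
        return self._obs(), {}

    def step(self, action):
        f = self.fdm
        f["fcs/elevator-cmd-norm"] = float(action[0])
        f["fcs/aileron-cmd-norm"] = float(action[1])
        f["fcs/rudder-cmd-norm"] = float(action[2])
        f["fcs/throttle-cmd-norm"] = float(action[3])
        for _ in range(self.sim_steps_per_action):
            f.run()
        obs = self._obs()
        # Assumed shaping: penalize altitude, heading, and speed tracking errors.
        reward = -(abs(obs[0]) + abs(obs[1]) + 0.5 * abs(obs[2]))
        self.steps += 1
        terminated = f["position/h-sl-ft"] < 500.0  # treat very low altitude as failure
        truncated = self.steps >= self.max_steps
        return obs, float(reward), bool(terminated), truncated, {}


if __name__ == "__main__":
    model = PPO("MlpPolicy", HeadingAltitudeEnv(), verbose=1)
    model.learn(total_timesteps=200_000)  # training budget chosen only for illustration
```

In this sketch, the state-variable reduction described in the abstract would amount to shrinking `_obs()` to the altitude-related components; PPO hyperparameters are left at stable-baselines3 defaults.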
Source: Lecture Notes in Electrical Engineering
ISSN: 1876-1100
Year: 2025
Volume: 1326 LNEE
Page: 560-570
Language: English