Abstract:
The performance of unmanned ground vehicle (UGV) formations is crucial for large-scale material transport. In communication-denied environments, visual perception plays a central role in formation control. However, owing to unstable lighting, dust, fog, and visual occlusions, developing high-precision visual formation control that does not rely on external markers remains a significant challenge for UGVs. This study developed a new UGV formation controller that relies solely on onboard visual sensors and proposed TSTMIPI, a teacher-student training method combining the PPO algorithm with imitation learning, which significantly improves the control precision and convergence speed of the vision-based reinforcement learning formation controller. To further enhance formation stability, we constructed a belief state encoder (BSE) based on convolutional neural networks, which effectively fuses visual perception with proprioceptive information. Simulation results show that the control strategy combining TSTMIPI and BSE not only eliminates reliance on external markers but also significantly improves control precision under varying noise levels and visual occlusion conditions, surpassing existing visual formation control methods in maintaining the desired distance and angular precision.
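The abstract describes combining PPO with imitation learning in a teacher-student setup. A minimal sketch of such a combined objective is shown below; the function names, the MSE imitation term, and the fixed weighting `lam` are illustrative assumptions, not the authors' exact TSTMIPI formulation.

```python
# Hedged sketch: a per-sample teacher-student objective that adds an
# imitation (behavior-cloning) term to the standard PPO clipped surrogate.
# All names and the weighting scheme are assumptions for illustration.

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """PPO clipped surrogate for one sample (returned as a loss to minimize)."""
    unclipped = ratio * advantage
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps) * advantage
    return -min(unclipped, clipped)

def imitation_loss(student_action, teacher_action):
    """Mean squared error between student and teacher action vectors."""
    n = len(student_action)
    return sum((s - t) ** 2 for s, t in zip(student_action, teacher_action)) / n

def combined_loss(ratio, advantage, student_action, teacher_action, lam=0.5):
    """RL term plus a weighted imitation term, as in teacher-student training."""
    return ppo_clip_loss(ratio, advantage) + lam * imitation_loss(
        student_action, teacher_action
    )
```

In practice the imitation weight would typically be annealed as the student policy improves, so that the teacher guides early training without capping final performance.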
Source :
DRONES
Year: 2024
Issue: 12
Volume: 8