
Author:

Zhang, J. | Ma, N. | Wu, Z. | Wang, C. | Yao, Y.

Indexed by:

EI Scopus

Abstract:

Self-driving in dense traffic is very challenging because of the complexity of the driving environment and the dynamic behavior of traffic participants. Traditional methods usually rely on predefined rules, which adapt poorly to diverse driving scenarios. Deep reinforcement learning (DRL) outperforms rule-based methods in complex self-driving environments and shows great potential for intelligent decision-making. However, DRL suffers from inefficient exploration: it typically requires extensive trial and error to learn the optimal policy, which slows learning and makes it difficult for the agent to acquire well-performing decision-making policies in self-driving scenarios. Inspired by the strong performance of supervised learning on classification tasks, we propose a self-driving intelligent control method that combines human driving experience with an adaptive sampling supervised actor-critic algorithm. Unlike traditional DRL, our method modifies the learning process of the policy network by combining supervised learning with DRL and adding human driving experience to the learning samples, so that human driving experience and real-time human guidance better steer the self-driving vehicle toward the optimal policy. To further improve learning efficiency, we introduced real-time human guidance into the learning process and designed an adaptive balanced sampling method to improve sampling performance. We also designed the reward function in detail for different evaluation indexes, such as traffic efficiency, which further guides the agent toward a better self-driving intelligent control policy. Experimental results show that the method can control vehicles in complex traffic environments for self-driving tasks and outperforms other DRL methods. © 2024 the Author(s), licensee AIMS Press.
This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
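The adaptive balanced sampling idea mentioned in the abstract (mixing human-demonstration samples with agent-collected experience, with the mix adjusted over training) can be sketched as below. The decay schedule, batch size, and buffer layout here are illustrative assumptions for clarity; the paper's actual sampling rule and data structures are not given in this record.

```python
import random

def adaptive_sample(agent_buf, human_buf, step, total_steps,
                    batch=8, max_human_ratio=0.5):
    """Draw a mixed training batch from agent experience and human
    demonstrations. The human-demonstration share decays linearly with
    training progress (a hypothetical schedule for illustration)."""
    progress = step / total_steps
    ratio = max_human_ratio * (1.0 - progress)
    n_human = min(int(batch * ratio), len(human_buf))
    n_agent = batch - n_human
    # Uniform sampling within each buffer; the two sub-batches are
    # concatenated into one training batch.
    return random.sample(human_buf, n_human) + random.sample(agent_buf, n_agent)
```

Early in training the batch leans on human driving experience to guide the policy; late in training it is dominated by the agent's own experience, letting DRL refine the policy beyond the demonstrations.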

Keyword:

actor-critic; self-driving; deep reinforcement learning; intelligent control

Author Community:

  • [ 1 ] [Zhang J.]Beijing Key Laboratory of Information Service Engineering, Beijing Union University, Beijing, 100101, China
  • [ 2 ] [Ma N.]Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China
  • [ 3 ] [Wu Z.]Beijing University of Posts and Telecommunications, Beijing, 100876, China
  • [ 4 ] [Wang C.]Beijing Key Laboratory of Information Service Engineering, Beijing Union University, Beijing, 100101, China
  • [ 5 ] [Yao Y.]Beijing Shuncheng High Technology Corporation, Beijing, 102206, China


Source :

Mathematical Biosciences and Engineering

ISSN: 1547-1063

Year: 2024

Issue: 5

Volume: 21

Page: 6077-6096

Impact Factor: 2.600 (JCR@2022)

ESI Highly Cited Papers on the List: 0
