
Author:

Xu, Dongwei (Xu, Dongwei.) | Zhang, Biao (Zhang, Biao.) | Qiu, Qingwei (Qiu, Qingwei.) | Li, Haijian (Li, Haijian.) | Guo, Haifeng (Guo, Haifeng.) | Wang, Baojie (Wang, Baojie.)

Indexed by:

EI; Scopus; SCIE

Abstract:

The application of Deep Reinforcement Learning (DRL) has significantly advanced autonomous driving technology in the field of intelligent transportation. However, mixed traffic scenarios involving both human-driven vehicles (HDVs) and connected and autonomous vehicles (CAVs) pose challenges, particularly for information sharing and collaborative control among multiple DRL-based intelligent agents. To address this issue, we propose a novel framework, Spatial-Temporal Deep Reinforcement Learning (ST-DRL), that enables collaborative control among multiple CAVs in mixed traffic scenarios. First, the traffic states of multiple agents are constructed as graph-formatted data, which are then created sequentially to represent continuous time intervals. With this data representation, interactive behaviors and dynamic characteristics among multiple intelligent agents are implicitly captured. Next, to better represent the spatial relationships between vehicles, a graph embedding network is utilized to encode the vehicle states, which improves the efficiency of information sharing among the agents. Additionally, a spatial-temporal feature fusion network module is designed that integrates graph convolutional networks (GCN) and gated recurrent units (GRU); it can effectively fuse independent spatial and temporal features and further enhance collaborative control performance. Extensive experiments in the SUMO traffic simulator and comparisons with baseline methods demonstrate that the ST-DRL framework achieves higher success rates in mixed traffic scenarios and a better trade-off between safety and efficiency. Analysis of the results indicates that ST-DRL increases the task success rate by 15.6% compared to the baseline method, while reducing model training and task completion times by 26.6%.
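The abstract describes fusing spatial features from a GCN with temporal features from a GRU over successive graph snapshots of the traffic state. A minimal NumPy sketch of that fusion pattern is given below; this is an illustrative assumption about the architecture, not the paper's actual implementation, and all shapes, weight names, and the toy adjacency matrix are invented for the example.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: relu(D^-1/2 (A+I) D^-1/2 H W).

    A: (N, N) adjacency of the vehicle graph; H: (N, F) node features;
    W: (F, D) learnable weights. All names here are illustrative."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)       # ReLU activation

def gru_step(h, x, Wz, Uz, Wr, Ur, Wh, Uh):
    """Standard GRU update fusing the spatial embedding x into hidden state h."""
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    z = sigmoid(x @ Wz + h @ Uz)                 # update gate
    r = sigmoid(x @ Wr + h @ Ur)                 # reset gate
    h_cand = np.tanh(x @ Wh + (r * h) @ Uh)      # candidate state
    return (1.0 - z) * h + z * h_cand

# Illustrative run: 3 timesteps of a 4-vehicle graph, 3 features per vehicle.
rng = np.random.default_rng(0)
N, F, D = 4, 3, 8
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)        # toy vehicle-interaction graph
W_gcn = rng.standard_normal((F, D)) * 0.1
gru_w = [rng.standard_normal((D, D)) * 0.1 for _ in range(6)]
h = np.zeros((N, D))                             # fused embedding per vehicle
for t in range(3):                               # successive graph snapshots
    H_t = rng.standard_normal((N, F))            # per-vehicle state at time t
    h = gru_step(h, gcn_layer(A, H_t, W_gcn), *gru_w)
print(h.shape)                                   # (4, 8)
```

Each GCN pass mixes information between neighboring vehicles (spatial), and the GRU carries that mixed state across timesteps (temporal), yielding one spatial-temporal embedding per vehicle that a DRL policy could consume.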

Keyword:

Graph neural network; Autonomous driving; Deep reinforcement learning; Gated recurrent unit

Author Community:

  • [ 1 ] [Xu, Dongwei]Zhejiang Univ Technol, Dept Inst Cyberspace Secur, 288 Liuhe Rd, Hangzhou 311121, Zhejiang, Peoples R China
  • [ 2 ] [Guo, Haifeng]Zhejiang Univ Technol, Dept Inst Cyberspace Secur, 288 Liuhe Rd, Hangzhou 311121, Zhejiang, Peoples R China
  • [ 3 ] [Zhang, Biao]Informat Syst GS1 China & Oriental Speedy Code Tec, Beijing 100011, Peoples R China
  • [ 4 ] [Qiu, Qingwei]Zhejiang Univ Technol, Dept Coll Informat Engn, 288 Liuhe Rd, Hangzhou 311121, Peoples R China
  • [ 5 ] [Li, Haijian]Beijing Univ Technol, Coll Metropolitan Transportat, 100 Pingyuan Village,Boya Rd, Beijing 100021, Peoples R China
  • [ 6 ] [Wang, Baojie]Changan Univ, Key Lab Transport Ind Management, Xian 710064, Peoples R China

Reprint Author's Address:

  • [Wang, Baojie]Changan Univ, Key Lab Transport Ind Management, Xian 710064, Peoples R China

Source:

APPLIED INTELLIGENCE

ISSN: 0924-669X

Year: 2024

Issue: 8

Volume: 54

Page: 6400-6414

Impact Factor: 5.300 (JCR@2022)

ESI Highly Cited Papers on the List: 0
