
Author:

Xu, D. | Zhang, B. | Qiu, Q. | Li, H. | Guo, H. | Wang, B.

Indexed by:

EI; Scopus; SCIE

Abstract:

The application of Deep Reinforcement Learning (DRL) has significantly impacted the development of autonomous driving technology in the field of intelligent transportation. However, in mixed traffic scenarios involving both human-driven vehicles (HDVs) and connected and autonomous vehicles (CAVs), challenges arise, particularly concerning information sharing and collaborative control among multiple intelligent agents using DRL. To address this issue, we propose a novel framework, Spatial-Temporal Deep Reinforcement Learning (ST-DRL), that enables collaborative control among multiple CAVs in mixed traffic scenarios. First, the traffic states involving multiple agents are constructed as graph-formatted data, which is then created sequentially to represent continuous time intervals. This data representation implicitly captures the interactive behaviors and dynamic characteristics of multiple intelligent agents. Next, to better represent the spatial relationships between vehicles, a graph-based network is utilized to encode the vehicle states, which improves the efficiency of information sharing among multiple intelligent agents. Additionally, a spatial-temporal feature fusion network module is designed that integrates graph convolutional networks (GCN) and gated recurrent units (GRU); it effectively fuses independent spatial and temporal features and further enhances collaborative control performance. Extensive experiments in the SUMO traffic simulator, with comparisons against baseline methods, demonstrate that the ST-DRL framework achieves higher success rates in mixed traffic scenarios and a better trade-off between safety and efficiency. Analysis of the results indicates that ST-DRL increases the task success rate by 15.6% compared to the baseline method, while reducing model training and task completion times by 26.6%.
Graphical abstract: (figure available in the original publication) © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
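The spatial-temporal fusion pipeline described in the abstract (per-frame GCN encoding of the vehicle graph, followed by GRU aggregation across time steps) can be sketched as a small NumPy example. This is an illustrative reconstruction, not the authors' implementation: the function names, dimensions, and the symmetrically normalized graph convolution are assumptions for the sketch.

```python
import numpy as np

def graph_conv(A, X, W):
    """One GCN layer: ReLU(D^-1/2 (A + I) D^-1/2 X W).

    A: (n, n) adjacency of the vehicle graph, X: (n, f) node features,
    W: (f, h) learned weights. Self-loops are added before normalizing.
    """
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, x, Wz, Uz, Wr, Ur, Wh, Uh):
    """Standard GRU cell update for one time step."""
    z = sigmoid(x @ Wz + h @ Uz)              # update gate
    r = sigmoid(x @ Wr + h @ Ur)              # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)  # candidate state
    return (1 - z) * h + z * h_tilde

def st_fusion(A, X_seq, Wg, gru_params):
    """Encode each frame spatially (GCN), then fuse temporally (GRU)."""
    n, hidden = X_seq[0].shape[0], Wg.shape[1]
    h = np.zeros((n, hidden))
    for X in X_seq:                 # continuous time intervals
        s = graph_conv(A, X, Wg)    # spatial features for this frame
        h = gru_step(h, s, *gru_params)
    return h                        # fused spatial-temporal state per vehicle

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, f, hdim, T = 4, 3, 5, 6      # vehicles, features, hidden size, frames
    A = np.triu((rng.random((n, n)) < 0.5).astype(float), 1)
    A = A + A.T                     # symmetric vehicle-interaction graph
    X_seq = [rng.standard_normal((n, f)) for _ in range(T)]
    Wg = rng.standard_normal((f, hdim)) * 0.1
    gru_params = tuple(rng.standard_normal((hdim, hdim)) * 0.1
                       for _ in range(6))
    print(st_fusion(A, X_seq, Wg, gru_params).shape)
```

In the full framework this fused state would feed the DRL policy head; the sketch only shows why stacking GCN over GRU lets each vehicle's hidden state summarize both neighborhood structure and recent motion history.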

Keyword:

Deep reinforcement learning; Graph neural network; Gated recurrent unit; Autonomous driving

Author Community:

  • [ 1 ] [Xu D.]Institute of Cyberspace Security, Zhejiang University of Technology, 288 Liuhe Road, Xihu District, Hangzhou, Zhejiang, 311121, China
  • [ 2 ] [Zhang B.]Information Systems in GS1 China and Oriental Speedy Code Tech, Beijing, 100011, China
  • [ 3 ] [Qiu Q.]College of Information Engineering, Zhejiang University of Technology, 288 Liuhe Road, Xihu District, Hangzhou, Zhejiang, 311121, China
  • [ 4 ] [Li H.]College of Metropolitan Transportation, Beijing University of Technology, 100 Pingyuan Village, Boya Road, Chaoyang District, Beijing, 100021, China
  • [ 5 ] [Guo H.]Institute of Cyberspace Security, Zhejiang University of Technology, 288 Liuhe Road, Xihu District, Hangzhou, Zhejiang, 311121, China
  • [ 6 ] [Wang B.]Key Laboratory of Transport Industry of Management, Chang'an University, Xi'an, Shaanxi, 710064, China

Source :

Applied Intelligence

ISSN: 0924-669X

Year: 2024

Issue: 8

Volume: 54

Page: 6400-6414

5.300 (JCR@2022)

SCOPUS Cited Count: 4

ESI Highly Cited Papers on the List: 0

30 Days PV: 13
