
Author:

Zhang, Yongnan | Zhou, Yonghua | Fujita, Hamido

Indexed by:

EI Scopus SCIE

Abstract:

The escalating air pollution resulting from traffic congestion has necessitated a shift in traffic control strategies towards green and low-carbon objectives. In this study, a graph convolutional network and self-attention value decomposition-based multi-agent actor-critic (GSAVD-MAC) approach is proposed to cooperatively control traffic network flow, where vehicle carbon emissions and traffic efficiency are incorporated into the reward function to minimize carbon emissions and traffic congestion. In this method, we design a local coordination mechanism based on a graph convolutional network, which guides the multi-agent decision-making process by extracting spatial topology and traffic flow characteristics between adjacent intersections. This enables distributed agents to make low-carbon decisions that account not only for their own interactions with the environment but also for local cooperation with neighboring agents. Further, we design a global coordination mechanism based on self-attention value decomposition, which guides the multi-agent learning process by assigning weights to distributed agents according to their degrees of contribution. This enables distributed agents to learn a globally optimal low-carbon control strategy in a cooperative and adaptive manner. In addition, we design a cloud computing-based parallel optimization algorithm for the GSAVD-MAC model to reduce computation time. Simulation experiments based on real road networks verify the advantages of the proposed method in terms of computational efficiency and control performance.
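The global coordination step described above — weighting each agent's value by its contribution before mixing into a single global value — can be illustrated with a minimal sketch. All names and the exact mixing form here are assumptions (a generic scaled dot-product attention over per-agent features, in the spirit of attention-based value decomposition); the paper's actual network architecture is not reproduced:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_value_decomposition(agent_values, agent_feats, state_feat):
    """Hypothetical sketch: compute attention scores between a global-state
    query and per-agent feature keys, then mix the per-agent values into one
    global value using the resulting weights."""
    # scaled dot-product attention scores, one per agent
    scores = agent_feats @ state_feat / np.sqrt(len(state_feat))
    weights = softmax(scores)                     # contribution weights, sum to 1
    q_total = float(weights @ agent_values)       # attention-weighted global value
    return q_total, weights

# Toy usage: three agents (e.g. three signalized intersections)
agent_values = np.array([1.0, 2.0, 3.0])          # per-agent value estimates
agent_feats = np.array([[1.0, 0.0],
                        [0.0, 1.0],
                        [1.0, 1.0]])              # per-agent features (keys)
state_feat = np.array([0.5, 0.5])                 # global-state query
q_total, weights = attention_value_decomposition(agent_values, agent_feats, state_feat)
```

Because the weights form a convex combination, the mixed global value always lies between the smallest and largest per-agent values, and agents whose features align more closely with the global state receive larger credit during learning.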

Keyword:

Process control; self-attention value decomposition; Roads; traffic network flow; low-carbon control; Carbon dioxide; Training; parallel optimization; Computational modeling; Optimization; graph convolutional network; Distributed multi-agent reinforcement learning; Decision making

Author Community:

  • [ 1 ] [Zhang, Yongnan]Beijing Univ Technol, Coll Metropolitan Transportat, Beijing Key Lab Traff Engn, Beijing 100124, Peoples R China
  • [ 2 ] [Zhou, Yonghua]Beijing Jiaotong Univ, Sch Automation & Intelligence, Beijing 100044, Peoples R China
  • [ 3 ] [Fujita, Hamido]Univ Teknol Malaysia, Malaysia Japan Int Inst Technol MJIIT, Kuala Lumpur 54100, Malaysia
  • [ 4 ] [Fujita, Hamido]Univ Granada, Andalusian Res Inst Data Sci & Computat Intelligen, Granada 18012, Spain
  • [ 5 ] [Fujita, Hamido]Iwate Prefectural Univ, Reg Res Ctr, Takizawa 020-0693, Japan

Reprint Author's Address:

  • [Zhou, Yonghua]Beijing Jiaotong Univ, Sch Automation & Intelligence, Beijing 100044, Peoples R China


Source :

IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS

ISSN: 1524-9050

Year: 2024

Impact Factor: 8.500 (JCR@2022)

Cited Count:

WoS CC Cited Count:

SCOPUS Cited Count: 1

ESI Highly Cited Papers on the List: 0

WanFang Cited Count:

Chinese Cited Count:

30 Days PV: 14

Affiliated Colleges:
