Author:

Ying, Chengshuo | Chow, Andy H. F. | Yan, Yimo | Kuo, Yong-Hong | Wang, Shouyang

Indexed by:

SSCI, EI, Scopus, SCIE

Abstract:

This paper presents a novel multi-agent deep reinforcement learning (MADRL) approach for real-time rescheduling of rail transit services with short-turnings during a complete track blockage on a double-track service corridor. The optimization problem is modeled as a Markov decision process in which multiple control agents reschedule train services on each directional line for system recovery. To ensure computational efficiency, we employ a multi-agent policy optimization framework in which each control agent uses a decentralized policy function to derive local decisions and a centralized value function approximation (VFA) to estimate global system state values. Both the policy functions and VFAs are represented by multi-layer artificial neural networks (ANNs). A multi-agent proximal policy optimization gradient algorithm is developed to train the policies and VFAs through iteratively simulated system transitions. The proposed framework is implemented and tested on real-world scenarios using data collected from the London Underground, UK. Computational results demonstrate the superiority of the developed framework in computational effectiveness over previous distributed control algorithms and conventional metaheuristic methods. We also provide managerial implications for train rescheduling during disruptions with different durations, locations, and passenger behaviors. Additional experiments show the scalability of the proposed MADRL framework in managing disruptions of uncertain duration via a generalized model. This study contributes to real-time rail transit management with innovative control and optimization techniques.
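
Note: the abstract describes a centralized-training, decentralized-execution design: one decentralized policy network per directional line plus a centralized VFA, trained with a clipped proximal policy optimization objective. The following is a minimal illustrative sketch of that general pattern in PyTorch; it is not the authors' code, and the agent count, observation and action dimensions, layer sizes, and clipping constant are hypothetical placeholders.

```python
# Illustrative sketch only -- not the authors' implementation.
# Decentralized actors + centralized critic with a clipped PPO update.
import torch
import torch.nn as nn
from torch.distributions import Categorical

N_AGENTS, OBS_DIM, N_ACTIONS, CLIP_EPS = 2, 16, 4, 0.2  # hypothetical sizes

class Actor(nn.Module):
    """Decentralized policy: maps one agent's local observation to action logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(),
                                 nn.Linear(64, N_ACTIONS))

    def forward(self, obs):
        return Categorical(logits=self.net(obs))

class CentralCritic(nn.Module):
    """Centralized VFA: scores the concatenated (global) system state."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_AGENTS * OBS_DIM, 64), nn.Tanh(),
                                 nn.Linear(64, 1))

    def forward(self, joint_obs):
        return self.net(joint_obs).squeeze(-1)

actors = [Actor() for _ in range(N_AGENTS)]
critic = CentralCritic()
opt = torch.optim.Adam([p for a in actors for p in a.parameters()]
                       + list(critic.parameters()), lr=3e-4)

def ppo_update(obs, acts, old_logp, returns):
    """One clipped-surrogate step; obs has shape [batch, N_AGENTS, OBS_DIM]."""
    values = critic(obs.flatten(1))         # centralized global-state value
    adv = (returns - values).detach()       # simple advantage estimate
    policy_loss = 0.0
    for i, actor in enumerate(actors):      # one decentralized actor per agent
        dist = actor(obs[:, i])
        ratio = torch.exp(dist.log_prob(acts[:, i]) - old_logp[:, i])
        clipped = torch.clamp(ratio, 1.0 - CLIP_EPS, 1.0 + CLIP_EPS)
        policy_loss = policy_loss - torch.min(ratio * adv, clipped * adv).mean()
    loss = policy_loss + 0.5 * (returns - values).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Hypothetical synthetic batch, just to exercise the update once.
B = 32
obs = torch.randn(B, N_AGENTS, OBS_DIM)
acts = torch.randint(0, N_ACTIONS, (B, N_AGENTS))
old_logp = torch.stack([actors[i](obs[:, i]).log_prob(acts[:, i])
                        for i in range(N_AGENTS)], dim=1).detach()
returns = torch.randn(B)
ppo_update(obs, acts, old_logp, returns)
```

In the paper's setting, each actor would correspond to the control agent for one directional line, while the centralized critic is used only during training; at execution time each agent acts from its local observations alone.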

Keyword:

Proximal policy optimization; Train rescheduling; Short-turning; Markov decision process; Multi-agent deep reinforcement learning

Author Community:

  • [ 1 ] [Ying, Chengshuo]Beijing Univ Technol, Coll Metropolitan Transportat, Beijing, Peoples R China
  • [ 2 ] [Ying, Chengshuo]Chinese Acad Sci, Acad Math & Syst Sci, Beijing 100190, Peoples R China
  • [ 3 ] [Wang, Shouyang]Chinese Acad Sci, Acad Math & Syst Sci, Beijing 100190, Peoples R China
  • [ 4 ] [Chow, Andy H. F.]City Univ Hong Kong, Dept Syst Engn & Engn Management, Kowloon Tong, Hong Kong, Peoples R China
  • [ 5 ] [Yan, Yimo]Univ Hong Kong, Dept Ind & Mfg Syst Engn, Hong Kong, Peoples R China
  • [ 6 ] [Kuo, Yong-Hong]Univ Hong Kong, Dept Ind & Mfg Syst Engn, Hong Kong, Peoples R China
  • [ 7 ] [Wang, Shouyang]Univ Chinese Acad Sci, Sch Econ & Management, Beijing 100190, Peoples R China
  • [ 8 ] [Wang, Shouyang]Tech Univ, Sch Entrepreneurship & Management, Shanghai 201210, Peoples R China

Reprint Author's Address:

  • [Chow, Andy H. F.]City Univ Hong Kong, Dept Syst Engn & Engn Management, Kowloon Tong, Hong Kong, Peoples R China

Source:

TRANSPORTATION RESEARCH PART B-METHODOLOGICAL

ISSN: 0191-2615

Year: 2024

Volume: 188

Impact Factor: 6.800 (JCR@2022)

Cited Count:

WoS CC Cited Count: 1

SCOPUS Cited Count: 2

ESI Highly Cited Papers on the List: 0

