Abstract:
With the rapid development of cloud computing technologies, more and more individual users and enterprises choose to deploy their key applications in green data centers (GDCs), and the scale of GDCs is increasing rapidly. To ensure service quality and maximize revenue, cloud service providers in GDCs need to allocate computing resources and schedule user tasks reasonably and efficiently. Traditional heuristic algorithms struggle with the uncertainty and complexity of task scheduling in GDCs. To address these challenges, this work establishes an improved resource allocation and task scheduling method based on deep reinforcement learning. It considers the dependencies among different tasks and builds a workload model based on real-life data from the Google cluster trace. In addition, a deep reinforcement learning-based scheduling model is proposed to reasonably allocate and schedule resources (CPU and memory) in GDCs. Based on these two models, an Improved Deep Q-learning Network (IDQN) is proposed to autonomously learn the changing environment of GDCs and yield an optimal strategy for resource allocation and task scheduling. Experiments based on real-life data demonstrate that IDQN achieves lower task rejection rates and energy costs than several typical task scheduling methods. © 2022 IEEE.
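The abstract describes Q-learning-based task scheduling but this record contains no implementation details. As a hedged illustration only, the sketch below uses tabular Q-learning (a simplification of the paper's deep IDQN) to learn whether to place an incoming task on one of two machines or reject it; all names, capacities, rewards, and parameters here are hypothetical, not taken from the paper.

```python
import random
from collections import defaultdict

# Toy Q-learning scheduler: assign each incoming task to one of two
# machines, or reject it. Rejections are penalized; using fewer active
# machines is rewarded as a crude proxy for energy cost.

MACHINES = 2
CAPACITY = 4                              # CPU slots per machine
ACTIONS = list(range(MACHINES)) + [-1]    # -1 = reject the task

def step(load, action, demand):
    """Apply an action; return (new_load, reward)."""
    load = list(load)
    if action == -1:
        return tuple(load), -5.0          # rejection penalty
    if load[action] + demand > CAPACITY:
        return tuple(load), -5.0          # infeasible, treated as rejection
    load[action] += demand
    # Prefer packing tasks onto fewer machines (energy-cost proxy).
    reward = 1.0 - 0.1 * sum(1 for l in load if l > 0)
    return tuple(load), reward

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = defaultdict(float)                # Q[(state, action)] -> value
    for _ in range(episodes):
        load = (0,) * MACHINES
        for _ in range(6):                # six tasks arrive per episode
            demand = rng.choice([1, 2])
            s = (load, demand)
            if rng.random() < eps:        # epsilon-greedy exploration
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[(s, x)])
            load2, r = step(load, a, demand)
            best_next = max(Q[((load2, d), x)] for d in (1, 2) for x in ACTIONS)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            load = load2
    return Q

Q = train()
# A greedy policy on an empty cluster should place the task, not reject it.
s0 = ((0, 0), 1)
best = max(ACTIONS, key=lambda a: Q[(s0, a)])
print(best)
```

The paper's IDQN replaces the Q-table with a deep network so it can generalize over the much larger state space of a real GDC (continuous CPU/memory levels, task dependencies); the learning update shown above is the same temporal-difference rule at its core.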
ISSN: 1062-922X
Year: 2022
Volume: 2022-October
Page: 556-561
Language: English
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count: 2
ESI Highly Cited Papers on the List: 0