Abstract:
Task scheduling is one of the key technologies in edge computing. End devices can significantly improve quality of service by offloading latency-sensitive tasks to edge servers, but this offloading wastes a large amount of power and compute resources. This paper therefore proposes a two-stage task offloading approach that ensures low latency while reducing the energy consumption of edge computing units and cloud computing centers. The mobile edge computing environment contains both edge computing nodes and cloud computing centers. A two-stage processing mechanism based on deep Q-learning automatically generates optimal long-term scheduling decisions that reduce power consumption while maintaining quality of service. Imitation learning is also applied during reinforcement learning to shorten the training time of the optimal policy. To evaluate the model, we use the Shortest Job First (SJF) and Heterogeneous Earliest Finish Time (HEFT) algorithms as baselines, with running time and energy consumption as the metrics. Compared with these baselines, our algorithm incurs 13% longer running time but achieves 34% lower average energy consumption. © 2022, Springer Nature Singapore Pte Ltd.
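The abstract's core idea — a Q-learning agent choosing where each task runs so as to trade latency against energy — can be sketched in miniature. The paper uses a deep Q-network over a richer state space; the toy below substitutes single-state tabular Q-learning, and every action, cost value, and weight is an illustrative assumption, not a figure from the paper.

```python
import random

# Toy offloading environment: a task can run LOCALly, on an EDGE node,
# or in the CLOUD. The (latency, energy) costs are hypothetical.
ACTIONS = ["local", "edge", "cloud"]
COST = {
    "local": (5.0, 4.0),  # no transfer delay, but a weak, power-hungry CPU
    "edge": (2.0, 2.0),   # nearby server: low latency, moderate energy
    "cloud": (4.0, 1.0),  # distant data center: higher latency, cheap energy
}

def reward(action, latency_weight=0.5):
    """Negative weighted sum of latency and energy (lower cost = higher reward)."""
    latency, energy = COST[action]
    return -(latency_weight * latency + (1 - latency_weight) * energy)

def train(episodes=2000, alpha=0.1, epsilon=0.2, seed=0):
    """Single-state tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)   # explore a random placement
        else:
            a = max(q, key=q.get)     # exploit the current best estimate
        # One-shot task, so the update has no bootstrapped next-state term.
        q[a] += alpha * (reward(a) - q[a])
    return q

q = train()
print(max(q, key=q.get))  # with these costs, offloading to the edge wins
```

Under these assumed costs the learned policy offloads to the edge, mirroring the paper's premise that edge placement balances latency against the energy bill of local execution.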
ISSN: 1865-0929
Year: 2022
Volume: 1566 CCIS
Page: 222-231
Language: English
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count: 4
ESI Highly Cited Papers on the List: 0