Abstract:
Cloud computing providers face several challenges in precisely forecasting large-scale workload and resource time series. Such prediction helps them achieve intelligent resource allocation, guaranteeing that users' performance needs are met without wasting computing, network, and storage resources. This work applies a logarithmic operation to reduce the standard deviation of workload and resource sequences before smoothing them. Noise interference and extreme points are then removed with a powerful filter, and a Min–Max scaler is adopted to standardize the data. An integrated deep learning method for time-series prediction is designed that incorporates both bi-directional and grid long short-term memory (LSTM) networks to achieve high-quality prediction of workload and resource time series. Experiments on Google cluster trace datasets demonstrate that the prediction accuracy of the proposed method is better than that of several widely adopted approaches. © 2020 Elsevier B.V.
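The abstract outlines a concrete pipeline: logarithmic transform, noise filtering, Min–Max scaling, then prediction with bi-directional and grid LSTM networks. The sketch below is a minimal illustration of that pipeline under stated assumptions, not the paper's implementation: a Savitzky–Golay filter stands in for the unnamed "powerful filter", a Keras bi-directional LSTM stacked with a plain LSTM layer stands in for the combined bi-directional/grid LSTM model, and the synthetic series, window length, and hyperparameters are purely illustrative.

```python
# Hedged sketch of the described preprocessing + LSTM forecasting pipeline.
# Assumptions: Savitzky-Golay smoothing, a stacked Bidirectional LSTM + LSTM
# model, and synthetic data; these are illustrative stand-ins, not the paper's
# exact filter, architecture, or Google cluster trace data.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Bidirectional, LSTM, Dense

def preprocess(series):
    """Log-transform, smooth, and Min-Max scale a 1-D workload/resource series."""
    logged = np.log1p(series)                                         # reduce standard deviation
    smoothed = savgol_filter(logged, window_length=11, polyorder=3)   # remove noise / extreme points
    scaler = MinMaxScaler()                                           # standardize to [0, 1]
    scaled = scaler.fit_transform(smoothed.reshape(-1, 1)).ravel()
    return scaled, scaler

def make_windows(values, lookback=30):
    """Slice a series into (lookback history -> next value) supervised samples."""
    X, y = [], []
    for i in range(len(values) - lookback):
        X.append(values[i:i + lookback])
        y.append(values[i + lookback])
    return np.array(X)[..., np.newaxis], np.array(y)

# Synthetic stand-in for a workload series from a cluster trace.
raw = np.abs(np.random.default_rng(0).normal(1000, 300, size=2000))
scaled, scaler = preprocess(raw)
X, y = make_windows(scaled)

model = Sequential([
    Bidirectional(LSTM(64, return_sequences=True), input_shape=(X.shape[1], 1)),
    LSTM(32),     # second recurrent layer in place of the paper's grid LSTM component
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

pred_scaled = model.predict(X[-1:], verbose=0)
# Undo the Min-Max scaling and the log transform to recover the original units.
pred = np.expm1(scaler.inverse_transform(pred_scaled))
```

The inverse transforms at the end matter for evaluation: accuracy is reported in the original workload/resource units, so both the Min–Max scaling and the logarithmic operation must be reversed on the model output.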
Source: Neurocomputing
ISSN: 0925-2312
Year: 2021
Volume: 424
Page: 35-48
Impact Factor: 6.000 (JCR@2022)
ESI Discipline: COMPUTER SCIENCE
ESI HC Threshold: 87
JCR Journal Grade: 2
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count: 118
ESI Highly Cited Papers on the List: 0