Abstract
Cloud computing providers face several challenges in precisely forecasting large-scale workload and resource time series. Such prediction can help them achieve intelligent resource allocation, ensuring that users’ performance needs are met without wasting computing, network, and storage resources. This work first applies a logarithmic operation to reduce the standard deviation of workload and resource sequences before smoothing them. Noise interference and extreme points are then removed with a Savitzky–Golay filter, and a Min–Max scaler is adopted to normalize the data. On top of this preprocessing, an integrated deep-learning method for time-series prediction, BG-LSTM, is designed; it combines bidirectional and grid long short-term memory (LSTM) networks to achieve high-quality prediction of workload and resource time series. Experiments on Google cluster trace datasets demonstrate that the proposed method attains higher prediction accuracy than several widely adopted approaches.
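For illustration only, the sketch below follows the preprocessing chain the abstract describes (logarithmic transform, Savitzky–Golay smoothing, Min–Max scaling) and attaches a stacked bidirectional LSTM as a stand-in predictor. The window length, polynomial order, layer widths, and the `make_windows` helper are assumptions rather than values from the paper, and the paper's grid-LSTM component is not a stock Keras layer and is not reproduced here.

```python
# Sketch of the described pipeline: log transform -> Savitzky-Golay smoothing
# -> Min-Max scaling, then a bidirectional-LSTM stand-in predictor.
# Hyperparameters and the make_windows helper are illustrative assumptions.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.preprocessing import MinMaxScaler
import tensorflow as tf

def preprocess(series, window_length=11, polyorder=3):
    """Log-transform, smooth, and scale a 1-D workload/resource series."""
    logged = np.log1p(series)                                    # damp large deviations
    smoothed = savgol_filter(logged, window_length, polyorder)   # remove noise / extreme points
    scaler = MinMaxScaler()
    scaled = scaler.fit_transform(smoothed.reshape(-1, 1)).ravel()
    return scaled, scaler

def make_windows(series, history=30):
    """Build sliding (input window, next value) pairs for supervised training."""
    X, y = [], []
    for i in range(len(series) - history):
        X.append(series[i:i + history])
        y.append(series[i + history])
    return np.asarray(X)[..., np.newaxis], np.asarray(y)

# Hypothetical usage on a synthetic workload trace (not Google cluster data).
raw = np.abs(np.random.default_rng(0).normal(100.0, 25.0, size=2000))
scaled, scaler = preprocess(raw)
X, y = make_windows(scaled)

# Stand-in predictor: stacked bidirectional LSTMs (not the paper's BG-LSTM).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=X.shape[1:]),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
```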
Original language | English (US) |
---|---|
Pages (from-to) | 35-48 |
Number of pages | 14 |
Journal | Neurocomputing |
Volume | 424 |
DOIs | |
State | Published - Feb 1 2021 |
All Science Journal Classification (ASJC) codes
- Computer Science Applications
- Cognitive Neuroscience
- Artificial Intelligence
Keywords
- BG-LSTM
- Cloud data centers
- Deep learning
- Hybrid prediction
- Savitzky–Golay filter