TY - GEN
T1 - Learning-based offloading of tasks with diverse delay sensitivities for mobile edge computing
AU - Zhang, Tianyu
AU - Chiang, Yi-Han
AU - Borcea, Cristian
AU - Ji, Yusheng
PY - 2019/12
Y1 - 2019/12
N2 - Ever-evolving mobile applications need increasing computing resources to smooth the user experience and, in some cases, to meet delay requirements. Consequently, mobile devices (MDs) increasingly struggle to complete all tasks in time due to limited computing power and battery life. To cope with this problem, mobile edge computing (MEC) systems were created to help with task processing for MDs at nearby edge servers. Existing works have been devoted to solving MEC task offloading problems, including those with simple delay constraints, but most of them neglect the coexistence of deadline-constrained and delay-sensitive tasks (i.e., the diverse delay sensitivities of tasks). In this paper, we propose an actor-critic based deep reinforcement learning (ADRL) model that takes the diverse delay sensitivities into account and offloads tasks adaptively to minimize the total penalty caused by deadline misses of deadline-constrained tasks and the lateness of delay-sensitive tasks. We train the ADRL model using a real data set containing tasks with diverse delay sensitivities. Our simulation results show that the proposed solution outperforms several heuristic algorithms in terms of total penalty, and it retains its performance gains under different system settings.
AB - Ever-evolving mobile applications need increasing computing resources to smooth the user experience and, in some cases, to meet delay requirements. Consequently, mobile devices (MDs) increasingly struggle to complete all tasks in time due to limited computing power and battery life. To cope with this problem, mobile edge computing (MEC) systems were created to help with task processing for MDs at nearby edge servers. Existing works have been devoted to solving MEC task offloading problems, including those with simple delay constraints, but most of them neglect the coexistence of deadline-constrained and delay-sensitive tasks (i.e., the diverse delay sensitivities of tasks). In this paper, we propose an actor-critic based deep reinforcement learning (ADRL) model that takes the diverse delay sensitivities into account and offloads tasks adaptively to minimize the total penalty caused by deadline misses of deadline-constrained tasks and the lateness of delay-sensitive tasks. We train the ADRL model using a real data set containing tasks with diverse delay sensitivities. Our simulation results show that the proposed solution outperforms several heuristic algorithms in terms of total penalty, and it retains its performance gains under different system settings.
KW - Actor-critic method
KW - Deep reinforcement learning
KW - Diverse delay sensitivities
KW - Mobile edge computing
KW - Task offloading
UR - http://www.scopus.com/inward/record.url?scp=85081968250&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85081968250&partnerID=8YFLogxK
U2 - 10.1109/GLOBECOM38437.2019.9013498
DO - 10.1109/GLOBECOM38437.2019.9013498
M3 - Conference contribution
T3 - 2019 IEEE Global Communications Conference, GLOBECOM 2019 - Proceedings
BT - 2019 IEEE Global Communications Conference, GLOBECOM 2019 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 IEEE Global Communications Conference, GLOBECOM 2019
Y2 - 9 December 2019 through 13 December 2019
ER -