Abstract
Fog-aided mobile IoT is proposed to speed up service response by deploying fog nodes at the network edge. We investigate task allocation in fog-aided mobile IoT networks, where mobile users generate computing tasks at different locations and offload them to fog nodes; the goal is to intelligently distribute tasks among fog nodes so as to adapt to varying wireless channel conditions and heterogeneous fog resources. The objective is to minimize the average task completion time subject to the mobile device's battery capacity and each task's completion deadline. In practice, future tasks are usually unknown in advance owing to the unpredictable environment, and hence an online algorithm is required to make decisions on the fly. Moreover, the local task information may be incomplete, so historical statistics should be utilized to estimate the most appropriate fog node for the current task. We therefore design an online reinforcement learning algorithm to address these two challenges, and we derive and analyze its computational complexity and theoretical performance bound. Simulation results show that our online algorithm asymptotically achieves the optimal performance, illustrate its performance in comparison with existing works, and validate the theoretical bound analysis.
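As a rough illustration of this kind of online decision rule (not the paper's actual algorithm), the sketch below chooses a fog node for each arriving task using UCB-style completion-time estimates built from historical feedback, while respecting a simple battery budget; the class name, energy model, and all parameters are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's algorithm): UCB-style online
# selection of a fog node per task, using historical completion-time
# statistics and a simple battery-energy feasibility check.
import math
import random

class OnlineFogAllocator:
    def __init__(self, num_fog_nodes, battery_budget):
        self.n = num_fog_nodes
        self.battery = battery_budget          # remaining device energy (illustrative units)
        self.counts = [0] * num_fog_nodes      # times each node was chosen
        self.avg_time = [0.0] * num_fog_nodes  # empirical mean completion time per node
        self.t = 0                             # number of tasks seen so far

    def choose_node(self, tx_energy):
        """Pick a fog node for the current task; return None if no node is affordable."""
        feasible = [k for k in range(self.n) if self.battery >= tx_energy[k]]
        if not feasible:
            return None
        self.t += 1
        # Try every affordable node once before exploiting the statistics.
        for k in feasible:
            if self.counts[k] == 0:
                return k
        # Lower confidence bound on completion time (we minimize time).
        def score(k):
            bonus = math.sqrt(2.0 * math.log(self.t) / self.counts[k])
            return self.avg_time[k] - bonus
        return min(feasible, key=score)

    def update(self, node, completion_time, energy_used):
        """Feed back the observed completion time and energy cost."""
        self.counts[node] += 1
        c = self.counts[node]
        self.avg_time[node] += (completion_time - self.avg_time[node]) / c
        self.battery -= energy_used


# Toy usage: tasks arriving online with random per-node offloading costs.
alloc = OnlineFogAllocator(num_fog_nodes=3, battery_budget=100.0)
for _ in range(20):
    tx_energy = [random.uniform(0.5, 2.0) for _ in range(3)]  # per-node energy cost
    node = alloc.choose_node(tx_energy)
    if node is None:
        break
    observed_time = random.uniform(0.1, 1.0) + 0.2 * node     # stand-in for real feedback
    alloc.update(node, observed_time, tx_energy[node])
```

The actual scheme in the paper additionally handles per-task deadlines and uses Lyapunov optimization for the long-term energy constraint; the sketch only conveys the explore-exploit flavor of learning fog-node quality from incomplete information.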
Original language | English (US)
---|---
Article number | 8917681
Pages (from-to) | 556-565
Number of pages | 10
Journal | IEEE Transactions on Green Communications and Networking
Volume | 4
Issue number | 2
DOIs | 
State | Published - Jun 2020
All Science Journal Classification (ASJC) codes
- Renewable Energy, Sustainability and the Environment
- Computer Networks and Communications
Keywords
- Internet of Things (IoT)
- Lyapunov optimization
- energy consumption
- fog computing
- online reinforcement learning
- quality of service (QoS)
- task allocation