Multiuser resource control with deep reinforcement learning in IoT edge computing

Lei, Lei, Xu, Huijuan, Xiong, Xiong, Zheng, Kan, Xiang, Wei, and Wang, Xianbin (2019) Multiuser resource control with deep reinforcement learning in IoT edge computing. IEEE Internet of Things Journal, 6 (6). pp. 10119-10133.

By leveraging the concept of mobile edge computing (MEC), the massive amounts of data generated by large numbers of Internet of Things (IoT) devices can be offloaded to an MEC server at the edge of the wireless network for further computation-intensive processing. However, due to the resource constraints of IoT devices and the wireless network, both communication and computation resources need to be allocated and scheduled efficiently for better system performance. In this article, we propose a joint computation offloading and multiuser scheduling algorithm for an IoT edge computing system that minimizes the long-term average weighted sum of delay and power consumption under stochastic traffic arrivals. We formulate the dynamic optimization problem as an infinite-horizon average-reward continuous-time Markov decision process (CTMDP) model. One critical challenge in solving this MDP problem for multiuser resource control is the curse of dimensionality, whereby the state space of the MDP model and the computational complexity grow exponentially with the number of users or IoT devices. To overcome this challenge, we use deep reinforcement learning (RL) techniques and propose a neural network architecture to approximate the value functions of the post-decision system states. The designed algorithm for solving the CTMDP problem supports a semidistributed auction-based implementation, in which the IoT devices submit bids to the base station (BS), which makes the resource control decisions centrally. Simulation results show that the proposed algorithm provides a significant performance improvement over the baseline algorithms and also outperforms RL algorithms based on other neural network architectures.
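To illustrate the post-decision-state idea the abstract refers to, the following is a minimal toy sketch, not the paper's method: a single device with a task queue learns a tabular value function over post-decision states (the queue length right after the offloading decision, before random arrivals) via TD(0), with the per-slot cost taken as a weighted sum of a delay proxy (queue length) and a power proxy (offloaded tasks). The paper instead uses a neural network over many users; the queue capacity, arrival process, weight, and learning parameters below are all assumptions for illustration.

```python
import random

random.seed(0)

# Assumed toy parameters (not from the paper).
MAX_Q = 10            # queue capacity
W_POWER = 0.5         # delay/power trade-off weight
ALPHA, GAMMA = 0.1, 0.9

# Tabular value function over POST-DECISION states (queue after offloading).
V = [0.0] * (MAX_Q + 1)

def cost(queue, offload):
    # Delay proxy (backlog) plus power proxy (transmissions this slot).
    return queue + W_POWER * offload

def greedy_action(queue):
    # Choose the offload amount minimizing immediate cost plus the
    # discounted value of the resulting post-decision state.
    return min(range(queue + 1),
               key=lambda a: cost(queue, a) + GAMMA * V[queue - a])

queue = 0
for _ in range(5000):
    a = greedy_action(queue)
    post = queue - a                            # post-decision state
    next_q = min(post + random.randint(0, 2), MAX_Q)  # stochastic arrivals
    # TD(0) target: greedy cost-to-go observed from the next pre-decision state.
    a_next = greedy_action(next_q)
    target = cost(next_q, a_next) + GAMMA * V[next_q - a_next]
    V[post] += ALPHA * (target - V[post])
    queue = next_q
```

Learning on post-decision states is what makes the greedy step cheap: the expectation over random arrivals is folded into V, so action selection needs no model of the traffic. The paper replaces the table `V` with a neural network and coordinates many devices through auction-style bids at the BS.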

Item ID: 61419
Item Type: Article (Research - C1)
ISSN: 2327-4662
Keywords: Deep reinforcement learning (DRL), Internet of Things (IoT), mobile edge computing (MEC)
Copyright Information: © 2019 IEEE.
Funders: National Natural Science Foundation of China
Date Deposited: 15 Jan 2020 07:48
FoR Codes: 40 ENGINEERING > 4006 Communications engineering > 400608 Wireless communication systems and technologies (incl. microwave and millimetrewave) @ 100%
