TY - JOUR
T1 - Stacked Autoencoder-Based Deep Reinforcement Learning for Online Resource Scheduling in Large-Scale MEC Networks
AU - Jiang, Feibo
AU - Wang, Kezhi
AU - Dong, Li
AU - Pan, Cunhua
AU - Yang, Kun
N1 - Research funded by National Natural Science Foundation of China (41604117, 41904127, 41874148, 61701179, 6162010601), Engineering and Physical Sciences Research Council (NIRVANA (EP/L026031/1)), Scientific Research Fund of Hunan Provincial Education Department in China (18A031), Hunan Provincial Science Technology Project Foundation (2018TP1018 and 2018RS3065)
PY - 2020/10/1
Y1 - 2020/10/1
N2 - An online resource scheduling framework is proposed for minimizing the sum of weighted task latency for all the Internet-of-Things (IoT) users, by optimizing offloading decision, transmission power, and resource allocation in the large-scale mobile-edge computing (MEC) system. Toward this end, a deep reinforcement learning (DRL)-based solution is proposed, which includes the following components. First, a related and regularized stacked autoencoder (2r-SAE) with unsupervised learning is applied to perform data compression and representation for high-dimensional channel quality information (CQI) data, which can reduce the state space for DRL. Second, we present an adaptive simulated annealing approach (ASA) as the action search method of DRL, in which an adaptive h-mutation is used to guide the search direction and an adaptive iteration is proposed to enhance the search efficiency during the DRL process. Third, a preserved and prioritized experience replay (2p-ER) is introduced to assist the DRL to train the policy network and find the optimal offloading policy. The numerical results are provided to demonstrate that the proposed algorithm can achieve near-optimal performance while significantly decreasing the computational time compared with existing benchmarks.
AB - An online resource scheduling framework is proposed for minimizing the sum of weighted task latency for all the Internet-of-Things (IoT) users, by optimizing offloading decision, transmission power, and resource allocation in the large-scale mobile-edge computing (MEC) system. Toward this end, a deep reinforcement learning (DRL)-based solution is proposed, which includes the following components. First, a related and regularized stacked autoencoder (2r-SAE) with unsupervised learning is applied to perform data compression and representation for high-dimensional channel quality information (CQI) data, which can reduce the state space for DRL. Second, we present an adaptive simulated annealing approach (ASA) as the action search method of DRL, in which an adaptive h-mutation is used to guide the search direction and an adaptive iteration is proposed to enhance the search efficiency during the DRL process. Third, a preserved and prioritized experience replay (2p-ER) is introduced to assist the DRL to train the policy network and find the optimal offloading policy. The numerical results are provided to demonstrate that the proposed algorithm can achieve near-optimal performance while significantly decreasing the computational time compared with existing benchmarks.
KW - Adaptive simulated annealing
KW - deep reinforcement learning (DRL)
KW - large-scale mobile-edge computing (MEC)
KW - stacked autoencoder
UR - http://www.scopus.com/inward/record.url?scp=85086279875&partnerID=8YFLogxK
U2 - 10.1109/jiot.2020.2988457
DO - 10.1109/jiot.2020.2988457
M3 - Article
SN - 2327-4662
VL - 7
SP - 9278
EP - 9290
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
IS - 10
M1 - 9070170
ER -