TY - JOUR
T1 - Deep Reinforcement Learning Based Dynamic Trajectory Control for UAV-assisted Mobile Edge Computing
AU - Wang, Liang
AU - Wang, Kezhi
AU - Pan, Cunhua
AU - Xu, Wei
AU - Aslam, Nauman
AU - Nallanathan, Arumugam
PY - 2021/2/16
Y1 - 2021/2/16
N2 - In this paper, we consider a platform of flying mobile edge computing (F-MEC), where unmanned aerial vehicles (UAVs) serve as equipment providing computation resources and enable task offloading from user equipment (UE). We aim to minimize the energy consumption of all the UEs by optimizing the user association, resource allocation, and trajectory of the UAVs. To this end, we first propose a Convex optimizAtion based Trajectory control algorithm (CAT), which solves the problem iteratively using the block coordinate descent (BCD) method. Then, to make real-time decisions while taking into account the dynamics of the environment (i.e., UAVs may take off from different locations), we propose a deep Reinforcement leArning based Trajectory control algorithm (RAT). In RAT, we apply Prioritized Experience Replay (PER) to improve the convergence of the training procedure. Unlike the convex optimization based algorithm, which may be sensitive to the initial points and requires iterations, RAT can adapt to any take-off points of the UAVs and can obtain the solution more rapidly than CAT once the training process has been completed. Simulation results show that the proposed CAT and RAT achieve considerable performance and both outperform traditional algorithms.
KW - Deep Reinforcement Learning
KW - Mobile Edge Computing
KW - Radio access technologies
KW - Reinforcement learning
KW - Resource management
KW - Task analysis
KW - Trajectory
KW - Trajectory Control
KW - Unmanned Aerial Vehicle (UAV)
KW - Unmanned aerial vehicles
KW - User Association
UR - http://www.scopus.com/inward/record.url?scp=85100939295&partnerID=8YFLogxK
DO - 10.1109/tmc.2021.3059691
M3 - Article
SP - 1
EP - 5
JO - IEEE Transactions on Mobile Computing
JF - IEEE Transactions on Mobile Computing
SN - 1536-1233
ER -