Deep Reinforcement Learning Based Dynamic Trajectory Control for UAV-assisted Mobile Edge Computing

Liang Wang, Kezhi Wang*, Cunhua Pan, Wei Xu, Nauman Aslam, Arumugam Nallanathan

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

In this paper, we consider a platform of flying mobile edge computing (F-MEC), where unmanned aerial vehicles (UAVs) serve as equipment providing computation resources and enable task offloading from user equipment (UE). We aim to minimize the energy consumption of all the UEs by jointly optimizing the user association, the resource allocation, and the trajectories of the UAVs. To this end, we first propose a Convex optimizAtion based Trajectory control algorithm (CAT), which solves the problem iteratively using the block coordinate descent (BCD) method. Then, to make real-time decisions while accounting for the dynamics of the environment (e.g., UAVs may take off from different locations), we propose a deep Reinforcement leArning based Trajectory control algorithm (RAT). In RAT, we apply Prioritized Experience Replay (PER) to improve the convergence of the training procedure. Unlike the convex optimization based algorithm, which may be sensitive to the initial points and requires iterations, RAT can adapt to any take-off points of the UAVs and, once training is complete, obtains the solution more rapidly than CAT. Simulation results show that the proposed CAT and RAT achieve similar performance, and both outperform traditional algorithms.
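The Prioritized Experience Replay mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it is a generic proportional-priority replay buffer (class name, `alpha` exponent, and the small constant added to TD errors are all illustrative assumptions), showing how transitions with larger temporal-difference error are sampled more often during training.

```python
import random

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (illustrative sketch)."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha        # how strongly TD error biases sampling (assumed value)
        self.buffer = []          # stored transitions
        self.priorities = []      # one priority per transition
        self.pos = 0              # next slot to overwrite when full

    def add(self, transition, td_error=1.0):
        # Small epsilon keeps zero-error transitions sampleable.
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(priority)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Sample indices with probability proportional to priority.
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idxs = random.choices(range(len(self.buffer)), weights=probs, k=batch_size)
        return idxs, [self.buffer[i] for i in idxs]

    def update_priorities(self, idxs, td_errors):
        # After a learning step, refresh priorities with the new TD errors.
        for i, e in zip(idxs, td_errors):
            self.priorities[i] = (abs(e) + 1e-6) ** self.alpha
```

In a DRL trajectory controller, the agent would push each (state, action, reward, next-state) tuple into such a buffer and sample prioritized mini-batches when updating its networks.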
Original language: English
Pages (from-to): 1-5
Number of pages: 15
Journal: IEEE Transactions on Mobile Computing
Publication status: Accepted/In press - 10 Feb 2021

