Abstract
In this paper, we consider a flying mobile edge computing (F-MEC) platform, in which unmanned aerial vehicles (UAVs) serve as computing servers and enable task offloading from user equipment (UE). We aim to minimize the energy consumption of all the UEs by jointly optimizing user association, resource allocation and the trajectories of the UAVs. To this end, we first propose a Convex optimizAtion based Trajectory control algorithm (CAT), which solves the problem iteratively using the block coordinate descent (BCD) method. Then, to make real-time decisions while accounting for the dynamics of the environment (e.g., UAVs may take off from different locations), we propose a deep Reinforcement leArning based Trajectory control algorithm (RAT). In RAT, we apply Prioritized Experience Replay (PER) to improve the convergence of the training procedure. Unlike the convex optimization based algorithm, which may be sensitive to the initial points and requires many iterations, RAT can adapt to any take-off points of the UAVs and obtains a solution much more rapidly than CAT once the training process is complete. Simulation results show that the proposed CAT and RAT achieve considerable performance and both outperform traditional algorithms.
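The PER technique cited above replays transitions with large temporal-difference (TD) error more frequently, which speeds up convergence of the actor-critic training in RAT. Below is a minimal proportional-PER sketch in Python; the class, method and parameter names are illustrative assumptions, not the authors' implementation.

```python
import random


class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (PER) buffer.

    Transition i is sampled with probability P(i) ~ p_i^alpha, and the
    importance-sampling weight w_i = (N * P(i))^(-beta) corrects the
    bias this introduces. Hypothetical sketch, not the paper's code.
    """

    def __init__(self, capacity, alpha=0.6, beta=0.4, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha      # how strongly priorities skew sampling
        self.beta = beta        # strength of the bias correction
        self.eps = eps          # keeps every priority strictly positive
        self.buffer = []        # stored transitions
        self.priorities = []    # one priority per stored transition
        self.pos = 0            # next write index (ring buffer)

    def add(self, transition):
        # New transitions get the current maximum priority so each
        # one is replayed at least once before being down-weighted.
        max_p = max(self.priorities, default=1.0)
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(max_p)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        scaled = [p ** self.alpha for p in self.priorities]
        total = sum(scaled)
        probs = [s / total for s in scaled]
        idxs = random.choices(range(len(self.buffer)),
                              weights=probs, k=batch_size)
        n = len(self.buffer)
        weights = [(n * probs[i]) ** (-self.beta) for i in idxs]
        max_w = max(weights)
        weights = [w / max_w for w in weights]  # normalize for stability
        batch = [self.buffer[i] for i in idxs]
        return batch, idxs, weights

    def update_priorities(self, idxs, td_errors):
        # After a learning step, priority becomes |TD error| + eps.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = abs(err) + self.eps
```

In a training loop, the agent would `add` each transition, `sample` a minibatch, scale the critic loss by the returned importance weights, and call `update_priorities` with the new TD errors.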
Original language | English |
---|---|
Pages (from-to) | 3536-3550 |
Number of pages | 15 |
Journal | IEEE Transactions on Mobile Computing |
Volume | 21 |
Issue number | 10 |
Early online date | 16 Feb 2021 |
DOIs | |
Publication status | Published - 1 Oct 2022 |
Keywords
- Deep Reinforcement Learning
- Mobile Edge Computing
- Radio access technologies
- Reinforcement learning
- Resource management
- Task analysis
- Trajectory
- Trajectory Control
- Unmanned Aerial Vehicle (UAV)
- Unmanned aerial vehicles
- User Association