Reinforcement Learning-Based Fault-Tolerant Control for Quadrotor UAVs Under Actuator Fault

Xiaoxu Liu, Zike Yuan, Zhiwei Gao*, Wenwei Zhang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Quadrotor UAVs, renowned for their agility and versatility, are extensively utilized in a wide range of areas. However, their inherent underactuated dynamics render them particularly vulnerable to external disturbances and system failures. To address this issue, our study introduces a hybrid control method tailored to the most prevalent type of drone failure: actuator faults. This approach leverages reinforcement learning to enhance fault tolerance. Specifically, reinforcement learning techniques are employed to output compensatory control signals that augment the core functionality of the base controller. This integration aims to preserve the stability and continuity of mission-critical tasks even in the presence of operational faults, thereby ensuring robust safety control. We utilized the proximal policy optimization algorithm for training the control policy. Testing in both simulated environments and real-world scenarios was conducted to evaluate the efficacy of our control strategy under conditions of actuator failure. The results affirm that our method significantly enhances the safety and stability of drone operations, maintaining control integrity during rotor failures.
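To illustrate the hybrid structure described in the abstract (a base controller augmented by a learned compensatory signal), the sketch below shows one way such a combination could be wired up. The mixer matrix, gains, observation layout, actuator limits, and the stand-in policy are hypothetical placeholders for illustration only; they are not the controller, reward design, or trained PPO policy used in the paper.

```python
import numpy as np

# Conceptual sketch only: dimensions, gains, and limits are illustrative assumptions.
NUM_ROTORS = 4

# Fixed mixer mapping (thrust, roll, pitch, yaw) commands to rotor thrusts
# for an "X"-configured quadrotor (illustrative values).
MIXER = np.array([
    [1.0, -1.0, -1.0, -1.0],
    [1.0,  1.0,  1.0, -1.0],
    [1.0,  1.0, -1.0,  1.0],
    [1.0, -1.0,  1.0,  1.0],
])

def base_controller(err, err_rate, kp=2.0, kd=0.5):
    """Nominal PD controller: (altitude, roll, pitch, yaw) errors -> rotor thrusts."""
    virtual_cmd = kp * err + kd * err_rate        # shape (4,)
    return MIXER @ virtual_cmd                    # per-rotor thrust commands

def hybrid_control(err, err_rate, observation, policy):
    """Base command plus a reinforcement-learning compensation term."""
    # `policy` stands in for a trained actor (e.g., the output of a PPO agent);
    # its output is treated here as a bounded per-rotor correction.
    compensation = np.clip(policy(observation), -1.0, 1.0)
    u = base_controller(err, err_rate) + compensation
    return np.clip(u, 0.0, 10.0)                  # illustrative actuator limits

# Usage with a stand-in policy that returns small random corrections.
rng = np.random.default_rng(0)
dummy_policy = lambda obs: 0.1 * rng.standard_normal(NUM_ROTORS)
err = np.array([0.3, 0.05, -0.02, 0.0])           # altitude/attitude errors (example)
err_rate = np.zeros(4)
obs = np.zeros(12)                                 # hypothetical observation vector
print(hybrid_control(err, err_rate, obs, dummy_policy))
```

In this arrangement the base controller keeps handling nominal flight, while the learned term only supplies corrections, which is the general intuition behind using RL compensation for fault tolerance.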
Original language: English
Pages (from-to): 1-10
Number of pages: 10
Journal: IEEE Transactions on Industrial Informatics
Early online date: 4 Sept 2024
DOIs
Publication status: E-pub ahead of print - 4 Sept 2024

Keywords

  • Actuator fault
  • fault-tolerant control
  • proximal policy optimization (PPO)
  • quadrotor UAV
  • reinforcement learning
