Explainable Fault Diagnosis of Control Systems Using Large Language Models

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

In control systems, accurate and timely diagnosis of malfunctions can ensure the safe and efficient operation of these systems. Although several methods have been proposed for process anomaly detection, including multivariate statistical process control, most of these models are built on statistical assumptions that limit their applications. Even when these models detect faults with high accuracy, questions such as “How did the model achieve this outcome?”, “Are the predictions valid?”, and “Do the outcomes reveal novel information?” often remain unanswered. Therefore, to ensure explainability in fault diagnosis of control systems, we propose a causal-based large language model that can potentially answer cause-and-effect questions within these systems, ensuring transparency, interpretability, and robustness.
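As an illustration of the multivariate statistical process control baseline mentioned in the abstract, the minimal sketch below implements a Hotelling's T² detector with a chi-square control limit. The synthetic data, variable names, and 99% confidence level are illustrative assumptions, not the paper's implementation.

    # Minimal sketch of an MSPC-style detector (Hotelling's T^2); illustrative only.
    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(0)

    # Illustrative "normal operation" data: 500 samples of 4 process variables.
    X_train = rng.normal(size=(500, 4))

    # Fit the in-control model: mean vector and covariance of normal operation.
    mu = X_train.mean(axis=0)
    cov = np.cov(X_train, rowvar=False)
    cov_inv = np.linalg.inv(cov)

    def hotelling_t2(x):
        """Hotelling's T^2 distance of a sample from the in-control distribution."""
        d = x - mu
        return float(d @ cov_inv @ d)

    # Control limit: chi-square approximation at a 99% confidence level.
    limit = chi2.ppf(0.99, df=X_train.shape[1])

    # New observation with a simulated fault on the third variable.
    x_new = np.array([0.1, -0.2, 4.0, 0.0])
    t2 = hotelling_t2(x_new)
    print(f"T^2 = {t2:.2f}, limit = {limit:.2f}, fault = {t2 > limit}")

Such a detector can flag that a fault occurred, but not why; this is the explainability gap the proposed causal-based large language model is intended to address.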
Original language: English
Title of host publication: 2024 IEEE Conference on Control Technology and Applications (CCTA)
Place of Publication: Piscataway, US
Publisher: IEEE
Pages: 491-498
Number of pages: 8
ISBN (Electronic): 9798350370942
ISBN (Print): 9798350370959
DOIs
Publication status: Published - 21 Aug 2024
Event: 2024 IEEE Conference on Control Technology and Applications (CCTA) - Northumbria University, Newcastle upon Tyne, United Kingdom
Duration: 21 Aug 2024 - 23 Aug 2024
https://ccta2024.ieeecss.org/

Publication series

Name: IEEE Conference on Control Technology and Applications (CCTA)
Publisher: IEEE
ISSN (Print): 2768-0762
ISSN (Electronic): 2768-0770

Conference

Conference: 2024 IEEE Conference on Control Technology and Applications (CCTA)
Abbreviated title: CCTA 2024
Country/Territory: United Kingdom
City: Newcastle upon Tyne
Period: 21/08/24 - 23/08/24
Internet address: https://ccta2024.ieeecss.org/

Keywords

  • control
  • fault diagnosis
  • large language models
  • causal inference
  • explainable AI
