GradCFA: A Hybrid Gradient-Based Counterfactual and Feature Attribution Explanation Algorithm for Local Interpretation of Neural Networks

Research output: Contribution to journal › Article › peer-review


Abstract

Explainable Artificial Intelligence (XAI) is increasingly essential as AI systems are deployed in critical fields such as healthcare and finance, offering transparency into AI-driven decisions. Two major XAI paradigms, counterfactual explanations (CFX) and feature attribution (FA), serve distinct roles in model interpretability. This study introduces GradCFA, a hybrid framework combining CFX and FA to improve interpretability by explicitly optimizing feasibility, plausibility, and diversity—key qualities often unbalanced in existing methods. Unlike most CFX research focused on binary classification, GradCFA extends to multi-class scenarios, supporting a wider range of applications. We evaluate GradCFA’s validity, proximity, sparsity, plausibility, and diversity against state-of-the-art methods, including Wachter, DiCE, CARE for CFX, and SHAP for FA. Results show GradCFA effectively generates feasible, plausible, and diverse counterfactuals while offering valuable FA insights. By identifying influential features and validating their impact, GradCFA advances AI interpretability.
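The paper does not reproduce GradCFA's objective here, but the general idea of gradient-based counterfactual search it builds on can be illustrated with a minimal sketch: starting from the original input, a candidate counterfactual is optimized by gradient descent toward a target class while a proximity penalty keeps it close to the input. The function name, loss weights, and the simple L1 proximity term below are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch of gradient-based counterfactual search (NOT GradCFA itself).
# Assumes a trained PyTorch classifier `model` and a 1-D input tensor `x`.
import torch

def gradient_counterfactual(model, x, target_class, steps=500, lr=0.05, lam=0.1):
    """Optimize x' so that model(x') predicts `target_class`,
    while an L1 proximity penalty keeps x' close to x (encouraging sparsity)."""
    x_cf = x.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([x_cf], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x_cf.unsqueeze(0))          # forward pass on the candidate
        pred_loss = torch.nn.functional.cross_entropy(logits, target)
        proximity = torch.norm(x_cf - x, p=1)      # stay near the original input
        loss = pred_loss + lam * proximity
        loss.backward()
        optimizer.step()
    return x_cf.detach()
```

A full method along the lines described in the abstract would add further terms for plausibility (e.g., staying on the data manifold) and diversity across multiple counterfactuals, and would handle multi-class targets; the sketch above only shows the core gradient-based loop.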
Original language: English
Pages (from-to): 1-13
Number of pages: 13
Journal: IEEE Transactions on Artificial Intelligence
Early online date: 18 Mar 2025
DOIs
Publication status: E-pub ahead of print - 18 Mar 2025

Keywords

  • Counterfactual Explanations
  • Feature Attribution
  • Explainable AI
  • Interpretable Machine Learning
