DiPACE: Diverse, Plausible and Actionable Counterfactual Explanations

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

As Artificial Intelligence (AI) becomes integral to high-stakes applications, the need for interpretable and trustworthy decision-making tools is increasingly essential. Counterfactual Explanations (CFX) offer an effective approach, allowing users to explore “what if?” scenarios that highlight actionable changes for achieving more desirable outcomes. Existing CFX methods often prioritize select qualities, such as diversity, plausibility, proximity, or sparsity, but few balance all four in a flexible way. This work introduces DiPACE, a practical CFX framework that balances these qualities while allowing users to adjust parameters according to specific application needs. DiPACE also incorporates a penalty-based adjustment to refine results toward user-defined thresholds. Experimental results on real-world datasets demonstrate that DiPACE consistently outperforms the existing methods Wachter, DiCE, and CARE in generating diverse, realistic, and actionable counterfactuals, with strong performance across all four characteristics. The findings confirm DiPACE’s utility as a user-adaptable, interpretable CFX tool suitable for diverse AI applications, with a robust balance of qualities that enhances both feasibility and trustworthiness in decision-making contexts.
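To illustrate the kind of objective such methods optimize, the sketch below shows a generic weighted counterfactual loss combining the four qualities named in the abstract (validity toward the target class, proximity, sparsity, and a diversity penalty). This is a minimal, hypothetical formulation for intuition only — the weights, distance choices, and function names here are assumptions, not DiPACE’s actual method or parameterization.

```python
import numpy as np

def cfx_loss(x, cf, predict_proba, target, cf_set,
             w_valid=1.0, w_prox=0.5, w_sparse=0.2, w_div=0.3):
    """Illustrative combined counterfactual objective (hypothetical weights).

    Lower is better: the counterfactual should reach the target class
    while staying close to x, changing few features, and differing from
    counterfactuals already found.
    """
    # Validity: squared gap between the predicted probability of the
    # desired class and 1 (i.e., how far cf is from being classified as target).
    validity = (1.0 - predict_proba(cf)[target]) ** 2
    # Proximity: L1 distance between the original instance and the counterfactual.
    proximity = np.abs(x - cf).sum()
    # Sparsity: number of features that were changed.
    sparsity = np.count_nonzero(~np.isclose(x, cf))
    # Diversity penalty: counterfactuals close to ones already in cf_set
    # are penalized, pushing the search toward distinct explanations.
    diversity_pen = sum(1.0 / (1.0 + np.abs(cf - other).sum()) for other in cf_set)
    return (w_valid * validity + w_prox * proximity
            + w_sparse * sparsity + w_div * diversity_pen)
```

In practice a method would minimize this loss over `cf` (e.g., by gradient descent or search), appending each accepted counterfactual to `cf_set` so the diversity term steers subsequent solutions apart.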
Original language: English
Title of host publication: Proceedings of the 17th International Conference on Agents and Artificial Intelligence
Editors: Ana Paula Rocha, Luc Steels, H. Jaap van den Herik
Publisher: Scitepress
Pages: 543-554
Number of pages: 12
Volume: 2
ISBN (Electronic): 9789897587375
DOIs
Publication status: Published - 25 Feb 2025

Publication series

Name: International Conference on Agents and Artificial Intelligence
Publisher: Science and Technology Publications, Lda
ISSN (Print): 2184-3589
ISSN (Electronic): 2184-433X

Keywords

  • Explainable Artificial Intelligence (XAI)
  • Counterfactual Explanations
  • Interpretable Machine Learning
