TY - GEN
T1 - DiPACE: Diverse, Plausible and Actionable Counterfactual Explanations
AU - Sanderson, Jacob
AU - Mao, Hua
AU - Woo, Wai
PY - 2025/2/25
Y1 - 2025/2/25
AB - As Artificial Intelligence (AI) becomes integral to high-stakes applications, the need for interpretable and trustworthy decision-making tools is increasingly essential. Counterfactual Explanations (CFX) offer an effective approach, allowing users to explore “what if?” scenarios that highlight actionable changes for achieving more desirable outcomes. Existing CFX methods often prioritize select qualities, such as diversity, plausibility, proximity, or sparsity, but few balance all four in a flexible way. This work introduces DiPACE, a practical CFX framework that balances these qualities while allowing users to adjust parameters according to specific application needs. DiPACE also incorporates a penalty-based adjustment to refine results toward user-defined thresholds. Experimental results on real-world datasets demonstrate that DiPACE consistently outperforms the existing methods Wachter, DiCE, and CARE in achieving diverse, realistic, and actionable CFs, with strong performance across all four characteristics. The findings confirm DiPACE’s utility as a user-adaptable, interpretable CFX tool suitable for diverse AI applications, with a robust balance of qualities that enhances both feasibility and trustworthiness in decision-making contexts.
KW - Explainable Artificial Intelligence (XAI)
KW - Counterfactual Explanations
KW - Interpretable Machine Learning
UR - http://www.scopus.com/inward/record.url?scp=105001742017&partnerID=8YFLogxK
U2 - 10.5220/0013219100003890
DO - 10.5220/0013219100003890
M3 - Conference contribution
VL - 2
T3 - International Conference on Agents and Artificial Intelligence
SP - 543
EP - 554
BT - Proceedings of the 17th International Conference on Agents and Artificial Intelligence
A2 - Rocha, Ana Paula
A2 - Steels, Luc
A2 - van den Herik, H. Jaap
PB - SciTePress
ER -