Explaining Explainable AI

Swaroop Panda, Shatarupa Thakurta Roy

Research output: Contribution to journal › Conference article › peer-review

Abstract

One aspect of user-friendly AI is explanation and greater transparency of AI systems. Explainable AI (XAI) is an emerging area of research dedicated to explaining and elucidating AI systems. To accomplish such explanation, XAI uses a variety of tools, devices and frameworks. However, some of these tools may prove complex or ambiguous in themselves, requiring explanation in turn. Visualization is one such tool used extensively in XAI. In this paper, we examine how such tools can be complex and ambiguous in themselves and thus distort the originally intended AI explanation. We further propose three broad ways to mitigate the risks associated with the tools, devices and frameworks used in XAI systems.

Keywords

  • Explainable AI
  • Visualization
