Abstract
An aspect of user-friendly AI involves explanation and improved transparency of AI systems. Explainable AI (XAI) is an emerging area of research dedicated to explaining and elucidating AI systems. To accomplish such an explanation, XAI uses a variety of tools, devices and frameworks. However, some of these tools may prove complex or ambiguous in themselves, requiring explanation in turn. Visualization is one such tool used extensively in XAI. In this paper, we examine how such tools can themselves be complex and ambiguous and thus distort the originally intended AI explanation. We further propose three broad ways to mitigate the risks associated with the tools, devices and frameworks used in XAI systems.
| Original language | English |
|---|---|
| Journal | Lecture Notes in Informatics (LNI), Proceedings - Series of the Gesellschaft für Informatik (GI) |
| DOIs | |
| Publication status | Published - 18 Aug 2020 |
| Externally published | Yes |
| Event | Mensch und Computer 2020 (MuC 2020) - Workshop on the 7th Human-Machine Interaction in Safety-Critical Systems - Magdeburg, Germany. Duration: 6 Sept 2020 → 9 Sept 2020 |
Keywords
- Explainable AI
- Visualization